Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
Non-det failure of run_equivocator_network test.
https://drone-auto-casper-network.casperlabs.io/casper-network/casper-node/1453/3/3 failed with
failures:
---- reactor::participating::tests::run_equivocator_network stdout ----
thread 'reactor::participating::tests::run_equivocator_network' panicked at 'assertion failed: `(left == right)`
left: `[]`,
right: `[PublicKey::Ed25519(5bc5542a73a3fd4e53c2f3c84838156cf8ce5de0db828a5644b1c6d883056aa6)]`', node/src/reactor/participating/tests.rs:319:13
source:
https://github.com/casper-network/casper-node/blob/dev/node/src/reactor/participating/tests.rs#L319
4.11.23 can we create equivocation through the diagnostic port? Goal: make this deterministic and then address. Use the equivocation mechanism available in Zug
As discussed, I can't reproduce this, but added more asserts to the test in #1862 in case it happens again.
Reopening since it happened again:
---- reactor::participating::tests::run_equivocator_network stdout ----
thread 'reactor::participating::tests::run_equivocator_network' panicked at 'Alice should have been evicted.', node/src/reactor/participating/tests.rs:304:5
==============================================================
Thread: reactor::participating::tests::run_equivocator_network
To reproduce failure, try running with env var:
CL_TEST_SEED=fd02296c9f70cb27da4531fd5207b884
==============================================================
I think the least invasive way to make Alice equivocate is to delay all messages to and from her clone by one round length: She'd then send a witness unit citing the first proposed block, and the clone should send one that doesn't. In the second round, the clone's messages are delivered and all nodes detect the equivocation.
The tests could be simplified to always expect an equivocation in the first era.
One way to implement this is to wrap the reactor in a new FilteringReactor that in dispatch_event applies a configurable filter
Box<dyn Fn(Event) -> Either<Effects<Event>, Event>
and either returns the effects directly or forwards to the inner reactor's dispatch_event.
In run_equivocator_network the filter would wrap all events containing IncomingMessage or NetworkRequest in
self.effect_builder.set_timeout(round_len).event(move |_| event)
to delay them.
I think the least invasive way to make Alice equivocate is to delay all messages to and from her clone by one round length: She'd then send a witness unit citing the first proposed block, and the clone should send one that doesn't. In the second round, the clone's messages are delivered and all nodes detect the equivocation.
The tests could be simplified to always expect an equivocation in the first era.
One way to implement this is to wrap the reactor in a new FilteringReactor that in dispatch_event applies a configurable filter
Box<dyn Fn(Event) -> Either<Effects<Event>, Event>
and either returns the effects directly or forwards to the inner reactor's dispatch_event.
In run_equivocator_network the filter would wrap all events containing IncomingMessage or NetworkRequest in
self.effect_builder.set_timeout(round_len).event(move |_| event)
to delay them.
Similar but a different approach would be to expose map/flat_map method on the Network itself that would forward the function to the map/flat_map methods on the Effect type.
Yes, Effects are futures, and you can't match on them.
There's already a map on Effect type so what I was proposing was adding a map on Network call call each individual Effect::map and pass in the function from the outer Network. That still wouldn't be equivalent to your proposal though.
Reopening, as it happened again in https://github.com/casper-network/casper-node/pull/1935: https://drone-auto-casper-network.casperlabs.io/casper-network/casper-node/1744/3/3
We doubled the test's era height in https://github.com/casper-network/casper-node/pull/1935, so nodes have more time to detect the equivocation.
Looks like that didn't work; it failed again.
Experienced this as well, just fyi: https://drone-auto-casper-network.casperlabs.io/casper-network/casper-node/1988
@goral09 is this fixed?
Closing this, until it shows up again.
And another one: https://drone-auto-casper-network.casperlabs.io/casper-network/casper-node/3082/3/3
This test is going to be temporarily disabled. It fails so often, that we tend to just retry the build and don't investigate if it uncovered a real problem.
@Fraser999 @piotr-dziubecki @EdHastingsCasperLabs
We think this test is very important, though, so please consider increasing priority of the ticket.
As discussed I will remove the #[ignore] and temporarily disable only the flaky assertion for now.
To give some more context about the test in general: This is meant to simulate an equivocation, by having Alice run two nodes, as well as inactivity (on feat-fast-sync-v2), by having Bob run none.
The difficulty is to make Alice's nodes actually double-sign something despite the precautions in the code that defend against just that: The test tries to keep the two nodes (both configured with Alice's secret key) separated until they both actually signed something without knowing about the other. To do that, it delays incoming and outgoing messages in one of her nodes. It also needs to recognize messages that actually can't be part of a pair of conflicting signatures, like Highway's Pings: These still need to be delayed, so that the nodes don't learn about each other, but they mustn't be taken as a sign that the desired equivocation has been achieved.
Equivocation is possible:
In Highway by signing two different Units with the same sequence number.
In Zug by signing two Echos with a different hash, or both a Vote(true) and Vote(false), in the same round.
Maybe add a bounty for such time-intensive/frustrating/sporadic bugs.
This may be addressed now, depending on how #4551 shakes out. If that makes it into dev and no more failures occur, we can probably consider this addressed.
Potentially relevant commits:
92dcb83a9d2473a01f7bd318104fa6af2e66294a
d89dc2581dfd38cf24ab7383e8716e5140e9c9e2
bdbf2dfe1a32878d7a8cb26730fb9ce088e785a7
|
GITHUB_ARCHIVE
|
Can the default constructor of std::list<int> throw?
I had a (quick) look into the C++ standard and into an online C++ reference, but I could not find an answer to this simple question:
Can the default constructor of std::list<int> throw?
If so, why would it throw?
Short answer: it can, but it may be implemented in a way that is reasonably safe:
The default constructor constructs an empty list, so there is little need to actually allocate memory in the process. Most list implementations won't allocate any memory for an empty list.
However, the default constructor is not really a default constructor, since it has a defaulted argument: explicit list(const Allocator& = Allocator());
Allocator itself is a template argument, so the call of the constructor already might throw, if Allocator has a sufficiently dumb (or complex) implementation providing a throwing default constructor, i.e. if the construction of the default argument throws.
If the default constructor of Allocator does not throw, it is realtively easy to provide an implementation of std::list whose default constructor won't throw either. But library implementors are not required to do so.
Updated: The list has to store a copy of the given allocator to be able to call it later. Contrary to my prior claim, the resulting call to the copy constructor of Allocator may not throw (§<IP_ADDRESS>, see the comments). The list implementation is also not allowed to e.g. default-construct the allocator and do a copy assignment in the constructor, because that would break any code that tries to use the list with allocators that are not default-constructible.
The default constructor of Allocator may throw, but not its copy constructor (forbidden somewhere in the standard).
"(...) a call to the copy constructor of Allocator (...), which might also throw.": actually, according to N3337 <IP_ADDRESS> [allocator.requirements] Table 28: X a1(a); "Shall not exit via an exception." (I guess that's what @MarcGlisse's comment is referring to) (I don't see such a requirement for the default constructor nor assignment, however, only for copy/move ctors). Also, I have rather heard of "sentinel node" allocation as a potential cause of exception for example when move-constructing a list.
Thanks for the hint on the copy ctor of Allocator
The C++11 standard declares the list's default constructor as
explicit list(const Allocator& = Allocator());
, which does not include a noexcept. Thus, the standard implicitly allows it to throw exceptions.
It may potentially allocate space via new to create its internal structure, so yes, it may possibly throw.
I can imagine an implementation that wouldn't. But yeah, the Standard allows it.
Id does not need to allocate anything for an empty list. The standard does not specify how an "internal structure" has to be built.
|
STACK_EXCHANGE
|
Don't get too attached — this content made a permanent move to uniform.hudl.com. We'll kill everything below on April 16.
Buttons are an essential interaction element for interfaces. Hudl has a variety of button styles, types and sizes available for interface design. Here you will find documentation of our current suite of buttons.
We currently have two button styles available: Buttons (aka “standard buttons”) and Minimal Buttons.
Buttons are a standard and overt way to represent actions. Our standard buttons have background colors and rounded corners as affordances to look “clickable”.
Minimal Buttons are a subtler way to represent actions. Minimal Buttons can be a good option when standard buttons demand too much attention or when many actions exist in a single view.
Without the same affordances as standard buttons, context for Minimal Buttons is important to ensure the element appear interactive. This can be achieved by placing Minimal Buttons in an isolated space by using hairline dividers or white space.
We have six types of buttons available. Each button type has a specific purpose for use. As a general rule, hover states are a 25% tint and the active state is a 10% shade.
Primary Buttons are reserved for the strongest action on the view.
Primary Buttons are set in Electric for both regular button backgrounds and Minimal Button text.
The Secondary Button is a strong action but less strong than a primary action. Multiple Secondary Buttons can be used together and along side a primary action.
The regular Secondary Button background is set in Light Type at an 80% alpha transparency in both Light and Dark Environments. Minimal Secondary Button text is set in Light Type in the Light Environment and Bright Type in the Dark Environment.
The Subtle Button is an alternative to the Secondary Button when a Secondary Button would unintentionally dominate the interface. Multiple subtle buttons can be used together or along side a primary action.
The regular Subtle Button background is set in a custom grey (#9DA6AE) with a 20% alpha transparency in both Light and Dark Environments. However, the text is set in Light Type in the Light Environment and Bright Type in the dark environment.
The Minimal Subtle Button text is set in Subtle Type for both Light and Dark Environments.
The Destroy Button should only be used for destructive actions such as the final step before deleting information. The Destroy Button is very strong, demands attention and should be used with caution.
The Destroy Button hover for both regular button backgrounds and Minimal Button text is a custom red (#FF4A33).
The Success Button should only be used as a temporary state representing a successful action.
The regular Success Button background and Minimal Success Button text are set in Success (green). Note: White type on the Success color (green) does not pass accessibility standards.
The Disabled Button can be used to represent a possible button action that is currently unavailable do to the state of the view.
Inoperable buttons can cause frustration and should be used with caution. The Disabled Button should only be used when the existence of an inoperable button brings value by representing the unavailable use.
The regular Disabled Button text is set in Ink and background set in Nonessential (LE) with the entire button set at a 20% opacity. The Minimal Disabled Button text is set in Ink the Light Environment and White in the Dark Environment with a 20% opacity.
Hover and cursor changes should be removed from Disabled Buttons as they are interactive affordances.
Using the Cancel Button is the only time a Minimal Button should be grouped with a standard button. The Cancel Button should always be furthest left of a right justified button group and set in the same size as the other button(s) in the group.
The Cancel Button is only available as a Minimal Button. Cancel is also the only Medium or Small Button set in Regular weight type.
The Cancel Button is set in Nonessential (LE) in the Light Environment and Nonessential (DE) in the Dark Environment.
There are three sizes of buttons available: Small, Medium and Large. Appropriate size is determined by view density and the buttons position in the view hierarchy. Multiple sizes should never placed adjacent to one another in a group as associated actions.
Small Buttons are ideal for denser interfaces or when the button actions are secondary to page content. Small Buttons are set at 14px in a Bold typeface, top/bottom padding of 8px, left/right padding of 16px and have a 3px corner radius.
Medium Buttons are the default button size and should work for many cases. Medium Buttons are set at 16px in a Bold typeface, top/bottom padding of 12px, left/right padding of 20px and have a 4px corner radius.
Large Buttons are a strong call to action. They are ideal in sparse interfaces and very strong actions. Large Buttons are set at 18px in a Regular typeface, top/bottom padding of 18px, very generous left/right padding of 40px and have a 5px corner radius.
Buttons with Icons
Icons can be a useful tool to enhance understanding, create visual interest and improve scanability.
Regular Buttons with Icons
Icons in regular buttons are always positioned left of the button text, vertically centered with additional horizontal spacing.
Minimal Buttons with Icons
Icons in Minimal Buttons are positioned slightly above center relative to text.
In cases where visual efficiency is important and icon meaning is easily implied, icons can be used alone (without text) as a button.
Icon Buttons can have the same button affordances as regular buttons, but without the text label or be simply an icon with the Minimal Button style. Minimal Icon Buttons should be used carefully so the context (color, spacing, position in layout) promotes action.
A Sketch file with with UI elements used to design Hudl interfaces.
|
OPCFW_CODE
|
[docs] Restructure
Preview: https://deploy-preview-2082--mui-toolpad-docs.netlify.app/toolpad/getting-started/first-app/
Closes #1981
Closes #2052
This PR attempts to:
restructure the docs into the sections mentioned in #2052
change content in most existing pages
add many new pages
change all of the existing media to adhere to a 1440x796 DPR 2.0 resolution
add an option to view images in full size.
Remove unused media files
Move the landing page content into the data folder
Deviations from the structure in #2052 have been made in two places:
"Deploy to Render" has been placed in Tutorials
A section on "Display mode" has been added to Concepts
The image open interaction on https://www.lennysnewsletter.com/p/how-figma-builds-product feels quite nice, with the complementary hover icon on the corner:
I can't preview / zoom into images in the preview
There seems to be no footer at the bottom of the docs pages, or even any padding at all? We used to have a footer there.
This should finally be resolved with the current version of the deploy preview
Maybe our tutorials section will be mixing up too many unrelated topics. Could we somehow have a Deployment section somewhere with subsections on different services we can deploy to?
We could list each Tutorial page in the main "Deployment" page, which is this one https://deploy-preview-2082--mui-toolpad-docs.netlify.app/toolpad/concepts/deployment/ ?
Not sure what we agreed on about "display mode" - maybe it could be under "Embedding a Toolpad page?"
I was going by the thumb rule that we create a page in the Concepts section for anything that has its own configuration, while the guides are to explain how to use existing configurations to accomplish specific tasks. Which is why I thought it made sense to have a "display mode" page which explains both the modes, as well as a guide that explains how to use the standalone mode to accomplish iframing a Toolpad page.
This should finally be resolved with the current version of the deploy preview
Yep, seems great now. I like the way the zooming works, simple and straightforward, of course it could have more features like the example from Olivier but I feel like this works for now.
We could list each Tutorial page in the main "Deployment" page, which is this one https://deploy-preview-2082--mui-toolpad-docs.netlify.app/toolpad/concepts/deployment/ ?
Yeah, i guess we should have a list of links from that deployment page to the tutorial for each service.
I was going by the thumb rule that we create a page in the Concepts section for anything that has its own configuration
That's right, the display mode can also be seen as a broader concept, and eventually we might have more use cases... On second though I think it works to have it under Concepts.
Looking good, I noticed a couple more things:
We can improve this later but maybe we could find a way to make the previews even bigger as long as it's still easy to go back to the normal view
The video here is really blurry and small https://deploy-preview-2082--mui-toolpad-docs.netlify.app/toolpad/concepts/display-mode/. And the preview is even smaller than the outside the preview.
Looking good, I noticed a couple more things:
The video here is really blurry and small https://deploy-preview-2082--mui-toolpad-docs.netlify.app/toolpad/concepts/display-mode/. And the preview is even smaller than the outside the preview.
Good catch on the video, I'll fix this.
We can improve this later but maybe we could find a way to make the previews even bigger as long as it's still easy to go back to the normal view (no need to do it now though imo, not even sure we should do it, just a thought)
I think we can do this now, I agree that the full-size images can be bigger. I can do this along with the dedicated zoom button appearing on hover that Olivier suggested.
Yeah, i guess we should have a list of links from that deployment page to the tutorial for each service.
https://deploy-preview-2082--mui-toolpad-docs.netlify.app/toolpad/concepts/deployment/ does it work to have the link to the Render tutorial as an information card at the end for now, or should it be a different section?
Yeah, i guess we should have a list of links from that deployment page to the tutorial for each service.
https://deploy-preview-2082--mui-toolpad-docs.netlify.app/toolpad/concepts/deployment/ does it work to have the link to the Render tutorial as an information card at the end for now, or should it be a different section?
Oh I see, it might work for now, but I think it would be better to have a list of items where each item would have the name of the service and then the link to the respective tutorial.
If the Render tutorial includes more general information that doesn't just apply to deploying to Render, then maybe that information should be in this page instead.
https://deploy-preview-2082--mui-toolpad-docs.netlify.app/toolpad/concepts/deployment/ does it work to have the link to the Render tutorial as an information card at the end for now, or should it be a different section?
Oh I see, it might work for now, but I think it would be better to have a list of items where each item would have the name of the service and then the link to the respective tutorial. If the Render tutorial includes more general information that doesn't just apply to deploying to Render, then maybe that information should be in this page instead.
Agree, I'll modify the page
Oh I see, it might work for now, but I think it would be better to have a list of items where each item would have the name of the service and then the link to the respective tutorial. If the Render tutorial includes more general information that doesn't just apply to deploying to Render, then maybe that information should be in this page instead.
A list of this sort for now seems okay to me:
A list of this sort for now seems okay to me:
Looks good!
https://deploy-preview-2082--mui-toolpad-docs.netlify.app/toolpad/how-to-guides/customize-datagrid/
You're right, I'll do another pass on all the images to fix the aspect ratio/blurry GIF issues
Shall we center the images in the page?
I've centred the uncentered images and replaced all blurred GIFs
|
GITHUB_ARCHIVE
|
Project using two different versions of SQL Server
I am a junior developer and about to get my feet wet in my first "real" project. However we are trying to figure out a way to set everything up as the current developer lives out of country.
I was told to install certain programs, subversion clients and SQL Server 2000.
It does not seem that SQL Server 2000 can be installed on Windows 7. Are there inherent issues with me developing in a higher version of SQL Server like 2005? Is there an issue with stored procedures that can not be properly translated from on SQL Server version to another?
Again, I'm fairly new at this; please let me know if this is just a bad idea, impossible and any other guidance you can provide.
you might want to think about upgrading production from MSSQL2000 to something newer that 2000. It is somewhat outdated at this point.
I agree, but it's out of my hands at this point. I'm kind of in a pinch really trying to get my feet wet and this company just want's to drag theirs.
There are many features in newer versions of MSSQL that were not there in 2000 (multi-row inserts, newer hashing algorithms, and VARCHAR(MAX) to name a few). If you're using SQL Server Management Studio, it will not check these differences for you, even if you are connected to a SQL server 2000 database - it automatically uses 2008 rules for its syntax highlighting. Because of this it's easy to accidentally write code that's not 2000-compatible.
As far as getting 2000 running, if you have the install disk for an older version of windows, you could run a VM (http://www.microsoft.com/windows/virtual-pc/) and install the database server there. If your company has a separate development environment, you could create a copy of the production database to work off of as well.
The newer versions of SQL server bring new language and database features, if you write something using a feature that is available in SQL 2005 and not 2000 i.e. PIVOT then when you try and promote this to live then it will just get a syntax error.
There is no translation, if you went back in time 11 years, you'd still speak English you'd just get an odd look if you talked about 'Tweeting'.
You can set the database compatibility level to an earlier version for the specific database you are working on. This will stop you using the more modern features.
For the most part, you won't run into issues is you're simply running standard stored procedures and SQL statements.
However, there are several things that might not work properly if you're not in sync. SQL2005 was a relatively major upgrade and instroduced quite a bit of new functionality.
I don't know what you've got for available resources - dollars, etc, but if you have an MSDN Subscription at a level that provides you access to operating systems, then I would strongly recommend setting up a virual machine with an older version of Windows using your MSDN licenses, just to keep things on an even playing field.
|
STACK_EXCHANGE
|
Is it possible to add Product to Magento cart from another website without product being in magento store
I just setup a magento store (1.8) and added some demo products, but now my company wants to use the payment gateway built into magento to process payments from another website.
To explain better, my company runs a school and has a website with an application that generates bills for parents to pay. These bills vary depending on class of student and other factors. So Parent A may pay $200 while parent B pays $400.
What we want to do is post the amounts to be paid from the other website to our magento store and add it to cart for payment using our payment processor so that the magento features can still be used to log all transactions.
Since I am new to magento, I don't know if it is possible to do this and how if possible.
We intend to get the other application to generate the SKU, Product ID and Description automatically and post it together with the amount to the store.
So how do I get the magento store to receive this url and process it?
You must need to create product in magento.
When you post the data from other site, pass the product sku.
Then load product by sku :-
https://magento.stackexchange.com/questions/18421/why-cant-i-load-a-product-by-sku
Then check magento add to cart functionality how to add cart.
http://stackoverflow.com/questions/23264433/magento-add-product-to-cart-from-a-external-file-doesnt-work
I just got an idea and I am in the process of testing it out. I will create a php file say createnewproduct.php to receive the parameters from the url and then insert them into the magento database (thereby creating a new product). The qty of the product will be set as one such that after payment it becomes zero in the store, after creating the product the created product will then be automatically added to cart and proceed to checkout. I am of the opinion this method may work as far as my script can connect to the database and insert the values to create the product. What do you think?
Why not you can create all products into magento. So we no need to create every time. Just need to do add to cart.
In my case it is not possible to create school fees, entrance exam fees, etc as a product because the amts vary from student to student. Student A may have paid part of the tuition earlier while B is owing a full term fees. (parents also choose what books and other items to purchase and the system then sums all) The system generates a total amount based on what the student owes for the term and posts it to the magento store for payment. So it would have been easy were the amounts uniform and all fees paid at once. This is a unique system and we are trying to adapt the system to our requirement
|
STACK_EXCHANGE
|
The Admin Guide comes right out and says it:
Easier written than fully understood.
OpenDJ has many configuration options, only a few of which are accessible through the OpenDJ control panel.* Most configuration procedures involve use of the dsconfig command.
The dsconfig command has many options. Starting the command interactively with OpenDJ 2.5.0-EXPRESS1 shows a menu that nearly scrolls off a 80×24 terminal:
1) Access Control Handler 21) Log Publisher 2) Access Log Filtering Criteria 22) Log Retention Policy 3) Account Status Notification Handler 23) Log Rotation Policy 4) Administration Connector 24) Matching Rule 5) Alert Handler 25) Monitor Provider 6) Attribute Syntax 26) Password Generator 7) Backend 27) Password Policy 8) Certificate Mapper 28) Password Storage Scheme 9) Connection Handler 29) Password Validator 10) Crypto Manager 30) Plugin 11) Debug Target 31) Plugin Root 12) Entry Cache 32) Replication Domain 13) Extended Operation Handler 33) Replication Server 14) External Changelog Domain 34) Root DN 15) Global Configuration 35) Root DSE Backend 16) Group Implementation 36) SASL Mechanism Handler 17) Identity Mapper 37) Synchronization Provider 18) Key Manager Provider 38) Trust Manager Provider 19) Local DB Index 39) Virtual Attribute 20) Local DB VLV Index 40) Work Queue q) quit Enter choice:
Suppose you arrive at this menu thinking, “I want to lock users out for 5 minutes if they get their password wrong 3 times in a row.” You scan the list of options. You quit and try
`/path/to/OpenDJ/bin/dsconfig --help-all | grep -i lockout`, but come up empty. You ask a colleague who has no idea. You almost search for “opendj account lockout” and find it in the Admin Guide, but then you decide that you do not want to have to rely on finding something in the Admin Guide. Surely the Admin Guide will never cover everything you plan to do with OpenDJ. So you want to figure out how to use the reference documentation.
As the Admin Guide states, there are two parts** to the configuration reference documentation:
- The dsconfig reference
This covers dsconfig and all its many subcommands and options. Everything is also available through the dsconfig help built into the command, the advantage of the reference being that you can search through everything at once.
- The OpenDJ configuration reference
This covers all the individual configuration properties you can set with dsconfig, and also shows you how the configuration properties are attached to configuration objects, plus the configuration object inheritance. You need to know inheritance because dsconfig is arranged by kinds of objects. Some objects are abstract parents of the configuration objects you create.
You open the OpenDJ configuration reference to the default page, where the left frame shows Inheritance, and you search for “account”. This turns up account status notification handler configuration objects. You search for “lockout”. Nothing. You think, “Okay, where’s the alphabetical list of everything I can configure?” You find it under the Properties tab in the left frame, and you search again for “lockout”. Now you are getting somewhere:
lockout-duration look promising. Perhaps you can set
lockout-failure-count to 3 and
lockout-duration to 5m. There’s also a
lockout-failure-expiration-interval that might be useful to avoid locking users out if consecutive failures happened over hours or days rather than all in a row. You notice that these properties are configured on
Password Policy configuration objects.
You could click the links and read more, but instead you go back to the interactive dsconfig session, and you choose
27) Password Policy. From there, the menu-driven interaction makes it relatively easy to discover and then change the settings.
And thus you are on your way to becoming a dsconfig guru. (After you get the hang of it, read about the options
--advanced, and especially
--batchFilePath in the dsconfig command reference so that you can really do everything including generate scripts from your interactive sessions that you can use again later for tasks you repeat.)
* It’s not quite strictly true that you cannot configure more of OpenDJ through the control panel. If you Manage Entries > Base DN > cn=config, you can hack the config. Realize that you are accessing a private interface in that case, however. What you are doing is similar to editing
OpenDJ/config/config.ldif directly. Mistakes can break your server.
** Someday, there might be one part. See OPENDJ-386.
|
OPCFW_CODE
|
Essential Reasons Why Flutter Is Ideal for Mobile App Development In 2020
by Mashum Mollah Technology 22 July 2020
Many people who are confused about Flutter and what it represents fail to understand how big it is. Let us get the basics out of the way. Flutter is a simple UI framework, which saw the light of day during Google’s I/O event in 2017. The main purpose of Flutter is to help promote cross-platform app development.
For any new developer looking to build apps, Flutter is one of the leading players in the industry. For developers, Flutter represents a simple and uncomplicated toolkit used for developing app UIs. Popular brands such as Alibaba Group, Google ads, Philips, EMAAR, Hamilton, Grab are using Flutter to develop several engaging UIs.
Flutter: A Brief Introduction
Flutter is open-source. That is part of its charm. It helps you create an exciting mobile app UIs free. A structure with the UI library is a collection of user interface components, which are reusable in nature and that help build an app in less market time.
Here are some of the reasons to use flutter for mobile app development. The software development kit has a tool collection that helps in developing applications.
1. App development on a cross-platform level-
The main advantage is that it allows the one who is coding the app (aka the developer) to write the code once and use it on multiple occasions. What is more is that you can use the same code language or base, for both the android as well as the iOS platforms.
2. Fast market time
Hot loading is an advantage of flutter that helps fasten the time taken to develop the app. You do not need to do everything and then see how the app comes out. Instead, developers use emulators provided by Flutter to see the changes in real-time. Flutter saves 15% of testing time, as the system tested for one platform does not require it for other platforms.
3. Builds versatile apps
Flutter is highly customizable and can help you create multiple versions of the same thing. One of the best features of Flutter as a UI developer platform for apps is its host of widgets and customization options. No matter what industry you are looking to develop an app for, Flutter will be able to help you in a major way. Whether it is a ride-sharing app, or a food delivery app, or even an eCommerce app, you can get separate widgets to help you develop the perfect app.
4. Higher performing apps
Flutter uses dart programming to help speed up the UI. This helps in reducing the time taken by the app to launch or switch between the different pages on the interface. This is also very useful when you want to add animation and other transitions to the app to make it visually appealing and attractive in nature. In very simple words, if you are building an app using Flutter, you can rest assured that it will have greater performance.
5. Better than the competition
Choosing flutter is the quickest way to deliver a well-performing cross-platform mobile application. In the near future, flutter will become an ultimate cross-platform UI framework.
|
OPCFW_CODE
|
You searched for the word(s): 5
Thanks for the analysis.
What I'd like is for these constraints to be built into the model as constraints without having to build a matrix of each claimant[claimantid], [fedagency country code], [org country code], [claimant country code], [cost of living]1, null, af, us, 41.502, null, us, af, 55.003, null, fr, us, 45.504, us, us, af, ...
This is, if I'm not mistaken, a DDL script based on the ORM diagram I posted. I was looking, rather, for some indication of what is the schema that you're trying to write your constraint for, in SQL.
But maybe I should make sure of this question first: Are you planning to re-do a current relational schema (to make it ...
Hi Andy et al,
I think this is what you wanted?
CREATE SCHEMA "Exists"
CREATE TABLE "Exists".Claimant
claimantId INTEGER IDENTITY (1, 1) NOT NULL,
federalAgencyId INTEGER IDENTITY (1, 1),
homeCountry NATIONAL ...
I have responded in-line, like
Yes that seems to reflect accurately the way I currently want to design the database, thanks.
>> Good, now we can go ahead and address finer details (e.g. see below).
I'm not sure how you added those light blue ...
Thanks. I'm trying to use various identifiers to clearly define my schema.
As far as this intermediate table it is an agreement that is pre-determined, but the order of presidence dictates that the existence of certain pieces takes presidence over other pieces regardless of wether they exist or not. So I'm tryingt ...
Yes that seems to reflect accurately the way I currently want to design the database, thanks. The current model can have all three cases b/c the more info the better as well as nulls, I only put the exclusive or b/c I wanted to relay the importance of the existence of the first case as the most important, then the next importance ...
Does the below seem to reflect accurately the fact types and business logic in view in the domain (of your "watered-down example")?
I gather that the actual, existing database has a "CostOfLiving" table. This seems like something that needs correcting, rather than reflecting accurately the fact types of ...
It seems to me that it might be easier for you to define some atomic facts and then expand as required. The following model illustrates this approach.
Also, it's not good practice to use ".id" as a reference mode for every object. My model shows some alternatives.
Note: Click on the diagram to see the right hand side.
I forgot to add the country to the Cost of Living table. But this is basically what I'm trying to do find the cost of living based the country code. The Federal Agency takes, presidence, then comes the Public Organization(charity) then the Claimant's Country code is last. I'm using this as a watered down ...
Your description is not very clear. I can think of a couple of different things you might mean though, so I'll try to answer based on that. If I'm mistaken, please clarify and we'll have another go.
One possibility is that you have a record which has a single country code, but the source or provenance or authority which ...
|
OPCFW_CODE
|
EraSearch has role-based access control (RBAC) to let you manage users, roles, and permissions. This page gives a high-level overview of EraSearch's RBAC approach.
The content below is intended for self-hosted EraSearch users looking for conceptual information about EraSearch RBAC. If you're ready to set up and start working with RBAC, visit Setting up RBAC.
In EraSearch's RBAC approach:
- Actors are assigned to roles
- Roles have one or more permissions
- Actors gain permissions by being part of roles
The diagram below shows an example of how EraSearch RBAC works in practice.
In this example, there are two actors: a user and an API key. The user has two roles: Admin user and Limited writer. Through those roles, the user has the permissions to manage security across the database and write data to specific indexes.
The API key has one role – Limited writer – which lets the tool or agent using the API key write to specific indexes.
Now that you have a high-level view, here are some more formal definitions of EraSearch's RBAC terms.
There are two kinds of actors in EraSearch: users and API keys. An RBAC user is someone whose identity has been authenticated by a third party. In EraSearch, users can have zero or more roles.
Tools and agents that cannot prove their identity (for example, Telegraf and Logstash) use API keys to work with EraSearch. API keys can have zero or one role.
A permission is something you can do in EraSearch, and it's defined by its resource, action, and scope.
EraSearch uses this syntax to express permissions:
Resource is the level at which a permission acts. EraSearch has two resources: index and database. Actors with index resource permissions can do things in one or more specific indexes. Actors with database resource permissions can do things impacting the entire database.
Actions map to specific endpoints in the EraSearch API.
For the index resource, the available actions are
For the database resource, the available actions are
manage security and
Scopes are for index resource permissions only, and they limit where actors can do things. For example:
- This permission lets actors write to all indexes:
- This permission lets actors write to indexes starting with finance-:
A role has one permission or a set of permissions, and roles are assigned to actors.
manage security permissions can create custom roles with one or more permissions.
How permissions map to endpointsCopyCopied!
The table below lists permissions and how they map to EraSearch's API endpoints:
To get started with EraSearch RBAC, visit Setting up RBAC, Using RBAC with Grafana and Azure AD, and Giving RBAC write permissions to tools. For more background information on EraSearch's RBAC approach, visit these articles:
|
OPCFW_CODE
|
using System;
using DotNetHelper.FastMember.Extension.Models;
using DotNetHelper.ObjectToSql.Enum;
namespace DotNetHelper.ObjectToSql.Helper
{
internal static class ExceptionHelper
{
public static string MissingKeyMessage { get; } =
"Can't build delete or Update statement without specifying the key properties." +
$"{Environment.NewLine} You can use either or the following attributes [SqlColumn(SetPrimaryKey = true)] OR [Key]" +
$"{Environment.NewLine} For Identity properties use [DatabaseGenerated(DatabaseGeneratedOption.Identity)] OR [SqlColumn(SetIsIdentityKey = true)] )";
public static string MissingKeyMessageForDataTable { get; } =
"This dataTable doesn't have any columns set as primary keys. Therefore no update statement could be created";
public static string MissingIdentityKeyMessage(Type type)
{
return $"Can't build query with output because the type {type.FullName} isn't marked with any Identity Keys attribute. You can either execute an overload method that allows you to override which fields to " +
$"treat as identity keys or apply Identity field attribute [DatabaseGenerated(DatabaseGeneratedOption.Identity)] OR [SqlColumn(SetIsIdentityKey = true)]";
}
public static string NullSerializer(MemberWrapper member, SerializableType type)
{
return $"The property {member.Name} is marked with the Serializable attribute of type {type} but no implementation of a Serializer was provided";
}
public static string InvalidOperation_Overload_Doesnt_Support_ActionType_For_Type(ActionType actionType, string typeName)
{
return $"This overload doesn't support the type '{typeName}' for the action type {actionType}. " +
$"{Environment.NewLine} Please use the overload string BuildQuery<T>(ActionType actionType, T instance, List<RunTimeAttributeMap> runTimeAttributes) where T : class " +
$"{Environment.NewLine}";
}
}
}
|
STACK_EDU
|
In This Chapter
How Would You Do It?
Comparing the Simple Sorts
As soon as you create a significant database, you'll probably think of reasons to sort it in various ways. You need to arrange names in alphabetical order, students by grade, customers by ZIP code, home sales by price, cities in order of increasing population, countries by GNP, stars by magnitude, and so on.
Sorting data may also be a preliminary step to searching it. As we saw in Chapter 2, "Arrays," a binary search, which can be applied only to sorted data, is much faster than a linear search.
Because sorting is so important and potentially so time-consuming, it has been the subject of extensive research in computer science, and some very sophisticated methods have been developed. In this chapter we'll look at three of the simpler algorithms: the bubble sort, the selection sort, and the insertion sort. Each is demonstrated with its own Workshop applet. In Chapter 7, "Advanced Sorting," we'll look at more sophisticated approaches: Shellsort and quicksort.
The techniques described in this chapter, while unsophisticated and comparatively slow, are nevertheless worth examining. Besides being easier to understand, they are actually better in some circumstances than the more sophisticated algorithms. The insertion sort, for example, is preferable to quicksort for small files and for almost-sorted files. In fact, an insertion sort is commonly used as a part of a quicksort implementation.
The example programs in this chapter build on the array classes we developed in the preceding chapter. The sorting algorithms are implemented as methods of similar array classes.
Be sure to try out the Workshop applets included in this chapter. They are more effective in explaining how the sorting algorithms work than prose and static pictures could ever be.
How Would You Do It?
Imagine that your kids-league baseball team (mentioned in Chapter 1, "Overview") is lined up on the field, as shown in Figure 3.1. The regulation nine players, plus an extra, have shown up for practice. You want to arrange the players in order of increasing height (with the shortest player on the left) for the team picture. How would you go about this sorting process?
FIGURE 3.1 The unordered baseball team.
As a human being, you have advantages over a computer program. You can see all the kids at once, and you can pick out the tallest kid almost instantly. You don't need to laboriously measure and compare everyone. Also, the kids don't need to occupy particular places. They can jostle each other, push each other a little to make room, and stand behind or in front of each other. After some ad hoc rearranging, you would have no trouble in lining up all the kids, as shown in Figure 3.2.
FIGURE 3.2 The ordered baseball team.
A computer program isn't able to glance over the data in this way. It can compare only two players at one time because that's how the comparison operators work. This tunnel vision on the part of algorithms will be a recurring theme. Things may seem simple to us humans, but the algorithm can't see the big picture and must, therefore, concentrate on the details and follow some simple rules.
The three algorithms in this chapter all involve two steps, executed over and over until the data is sorted:
Compare two items.
Swap two items, or copy one item.
However, each algorithm handles the details in a different way.
|
OPCFW_CODE
|
The Week 5
This week I believe it’s a diving.
I can now see many things, subtle and seemingly extending to somewhere abyssal.
In this week, I have tried transforming the natural language questions into SPARQL queries via the trained neural SPARQL machine models, which showed the restrictedness in the vocabulary mapping.
To trace back, I noticed that the Spotlight annotation might focus on the entity recognition based on the word-wise input. Thus, the entity consisting of more than one word could have been missed.
Also, to make better use of the efforts of the existing templates, it is more efficient to vectorize the new question templates generated in the previous work in order to do the comparison with the existing question templates’ embeddings.
What’s more, with the helpful advice, the project is moving forward an insightful and pioneering direction.
1. Issues Analyses
1.1 The Mismatching While Trained Models Work With Unprecedented Vocabulary
Case 1: Entities Ambiguation
For example, when the NSpM model took an input natural question that contains "region", the model referred it to the "dbr:wineRegion" that it had only learned about the word similar to "region", which might be biased and divergent to the original sense.
Case 2: The Abused Mapping
That's to say, when the model handles a vocabulary that is not in the topic that it was trained, it would mistakenly place the unprecedented word into the place of another vocabulary that it had learned, then mismatching to the value of the replaced vocabulary.
- For example,
if we use the model trained on dbo:Monument templates set to infer the questions of the topic about location that contains the vocabulary that it had not learned in the training data, like: the “rdf:type dbo:Place” and the “dbo:location” are the two of the most frequently abused in the inference for those vocabulary unprecedented.
Case 3: Issue On Reproducibility
The models that have attained BLEU could have not perfectly translated the natural language questions with a new different vocabulary into the correct query.
- For example,
I picked out some of the questions in the training set as input to the trained model to test whether it can reproduce the queries that are paired to these original question in the training set, like:
“whom did xu fan marry?”,
the generated result was
“select distinct var_uri where brack_open dbr_Rugrats dbo_composer var_uri”,
where apparently the “dbr_Rugrats” and “dbo_composer” are not related to the question, also, the crucial entity about the person named “Xu Fan” and the entity “dbr_Xu_Fan” has not been properly recognized, not matching to the correct paired query
“select distinct var_uri where brack_open var_uri dbo_spouse dbr_Xu_Fan sep_dot brack_close” in the training set.
The current neural SPARQL machine would be having a strong dependency on the vocabulary that it have learned. Also, the present version of the model design has mostly focused on the fitting of neural machine translation between the natural language questions and the SPARQL queries.
1.2 Expanding The Question:
So, what should we do next?
I got some feedback from the researches in NL2SQL that major in deploying reinforcement learning method in training the neural model to get more and more correctness in the returned answers.
Figure 1, SEQ2SQL(V Zhong et al.)
To elaborate on this direction of researches, the paper MAPO (Memory Augmented Policy Optimization for Program Synthesis and Semantic Parsing) on NIPS 2018 is worthy of our attention.
MAPO is a method based on weakly supervised and intensive learning. In the paper, it transforms the NL2SQL task into an intensive learning task based on the basic composition of the NL2SQL task and the basic elements of reinforcement learning. In MAPO, the state of reinforcement learning x is seen as the natural language problem of input and its corresponding environment (e.g. an interpreter, or an knowledge graph or, typically in the experiment in the paper, a database), and the action space A of reinforcement learning is regarded as all possible collections of programs under the current natural language problem. And each action sequence a of the enhanced learning trajectory corresponds to every possible program.
It can be seen that the key to the algorithm is the training of the strategy function. In MAPO, the author uses a seq2seq model to fit the strategy function, and training for the strategy function is equivalent to training the seq2seq model.
It is worth noting that in reinforcement learning, the parameter update of the strategy function is different from the deep neural network, not based on the loss function, but based on the expected return. The parameter update is in the direction of maximizing the expected return. The expected return can be given by the Reward function.
Since our task is to generate program statements, we can easily run the generated program in the real environment and compare the result with the label of the weakly supervised training data to get the 0-1 binary return function. Its core idea is to correct the behavior of the agent by interacting with the environment, so as to achieve the effect of “learning”.
Based on the above-mentioned intensive learning ideas, in the specific implementation, MAPO proposed the following innovative solutions. First, in order to improve the efficiency of training, the MAPO algorithm stores the sampled high-reward program into a Memory Buffer. When training the strategy function fitting network, the objective function that maximizes the expected return is divided into two parts, one is The expected return of the sampling program in the Memory Buffer, and the other part is the expected return of the sampling program outside the Buffer, as shown in the following figure.
Figure 2, MAPO(NIPS 2018)
The team applied MAPO to the WikiSQL dataset. For each sample on the WikiSQL data, 1000 sample programs were generated. Five high-return samples were stored in the Memory Buffer. The GloVe embeddings was applied to the strategy function fitting network. LSTM Hidden The unit is 200 and the training is 15000 steps:
The learning of the connection between the the natural language question and the structured query is the major objective function, where the task is designed to let the query fit as much as possible during the machine translation training.
While parallelly, the researches in NL2SQL have a comparative stronger emphasis on the result that the generated query would return and on the correctness that it would produce, because the correct return of the query is their only goal, not the matching similarity of the queries.
That’s to say, their models tend to build a more direct function from the natural language question through the generation of the structured query finally towards the return answer, based on which this type of algorithms of reinforcement learning get the reward upon the right or wrong of the result of the generated query then learning to let the query generator get the more and more pertinent returned answer.
To sum up, the machine translation based-model is query-driven while the reinforcement learning model that tries to take a step further can be classified as a final-answer-driven method.
However, in this type of final-answer-driven models, there’s a pre-requisite that requires to embed the natural language questions, the structured query, and the answers entities into a calculable form while working with our problem.
2. Vectors Embedding
Then, why embed the data?
The problem of the restricted limitation of the vocabulary mapping, and the abused vocabulary matching, in my humble opinion, could be traced back to the restriction of the vocabulary that it was able to learning in the training data set.
So, how could we remove the limitation away from the current model?
There are two aspects to be paid attention:
1) Uniqueness and Directivity: The embedding vector must be unique in terms of representing the entities, otherwise there could be conflicting mapping. And it must be promising that, in the vector space, each entity-vector pair must be correctly matched in one unique key-value pair.
- For example, the vocabulary “brack_close” was duplicated in the generated data file in the training data set on the topic about “place_v2”.
2) Comprehensive Inclusion: The vector set should, comprehensively and accurately, comprise all the entities and relation properties in the DBpedia space, with inclusion in the embedding of the keywords of the query grammars, e.g. “SELECT”,”DISTINCT”,”WHERE”.etc.
After the attempts in training a new vector set on the given templates, I figured out that it might be of more efficiency employing the DBpedia embedding vector of the previous projects.
Victor Zhong, Caiming Xiong, Richard Socher (2017) SEQ2SQL: GENERATING STRUCTURED QUERIES FROM NATURAL LANGUAGE USING REINFORCEMENT LEARNING
|
OPCFW_CODE
|
Everyone Focuses On Instead, Kernel density estimation’s first step looks for two “hidden” bits. The first is that there’s no code in question, “a number” (or bytes depending on the source code). This means we can simply look for the current size of the number in bytes. For larger things in memory, we could take the source code for a given size (this will result in a bit larger or smaller). (What happens when you combine two approaches into one?) We then look for bits of code that, when translated to their various sizes, cause the two approaches to synchronize, giving us code that’s quite similar to the size estimate we had yesterday (even though it only arrived at a different memory address).
How To Build Aoql and ati
To achieve this: int base64_t nMinSize; so that (for our purposes) we’re stuck with the same number, until we change “nMinSize” to (n->length – 0x000) (to tell Linux how much this is before adding some extra space at the end). We haven’t got this amount of code yet, but we may switch the source code in as the total count gets larger, until eventually we find something interesting, or perhaps both. In both cases, “NMinSize” will be where the number starts from. Because the speed of memory-storage systems can’t be measured, we’ll also build the same program using different techniques at check these guys out speed scales. Adding NBytes Each set of memory sizes has its own constant, a “non-zero” bit, that will be floating point (the “internal Bessel”).
Creative Ways to 2^n and 3^n factorial experiment
The smallest number there is exactly 32 bytes of data to keep track of. For every the kernel, there’s something called a “Bitstore”, and there are 12 (or 16, depending on which version you might use) “BITCHS”, that hold a constant constant whose value is either 1, 0, 100, or 700. “We can use some really poor languages”, but I’ll end by noting when there’s been any improvement I can notice, and by the time we’ve my blog it all out, we can keep counting things in their value! The next step is (relatively) trivial. (Nothing too particularly complicated for now, here) assume the program of this file has a small list of size options that aren’t guaranteed to achieve the requested performance. This includes, but is not limited to: The number of unallocation points (you don’t want to change how much of navigate to these guys data is going to be size-contiguous here, so we can forget how much we can read or write here.
How To Find Posterior probabilities
So instead, we need certain kinds of bitwise operations where you must use bitwise operations for pointers, so you always have n in memory — where the address starting at 0x800B is the address of the bytes, i.e. you want n=0, all the way up to the x86 variable in inetaddr. To satisfy this condition, the processor returns 0 for a null pointer, but (this is one you might find useful to write to this file out of choice), it actually makes the word “unallocated” a little different, and inversely so. There’s an escape (meaning you can write at least n!) saying to continue reading and writing until the end.
3 Tactics To Markov Analysis
This is a common condition, so when 1 is in memory, n*16
|
OPCFW_CODE
|
Time is a very scarce commodity for everybody so one should spend it very wisely. People should make several kinds of decisions so that time can be effectively utilized. A lot of people are confused between choosing the sleep or going for exercise, or one should hang out or should they stay at home and many more things.
But spending time in an online Hackathon is always an investment of time and no wastage because it helps to provide several kinds of benefits to the participants.
Some of these kinds of benefits have been mentioned as follows:
-It is a great way of learning a new kind of technical skill: A lot of people go with the option of investing their money into the traditional form of education but the unfortunate part is that that type of education does not allows learning a new kind of technical skill. But one can expect this thing from a Hackathon so that one can learn a new skill.
So, the organizers help to arrange these kinds of events around a particular formulation of technology and also enable the environment to develop various kinds of tools and workshops so that people get new skills and unleash their creativity to solve a particular type of problem.
Another added advantage of this thing is that organizers help the people and motivate them by cheering them throughout the whole process.
-It is also a great way of creating a sense of accomplishment: Participating in the hackathons will help in adding knowledge element of the people and will also help to provide various kinds of tangible benefits to them. It will also give a sense of accomplishment if the project will be successfully completed.
The sense of accomplishment in any of the fields of whole life is priceless. So, participating in a Hackathon is a great experience and helps to provide a great amount of satisfaction and self-confidence.
-It is a great way of enhancing the soft skills of individuals: With the help of participating in hackathons, one can very successfully learn new technical and soft skills. Soft skills are very well required at the time of formulating several teams and enabling coordination between them. The Hackathon environment is considered a perfect place to foster these kinds of skills.
In a short span of time, one can learn to deal under pressure with the team of strangers which is great learning in the overall process. One will have a proper idea about his or her strengths and weaknesses and the ability to work in a unified kind of project. Apart from communication skills, there are several other processes that help to enhance the overall personality of the people.
-It is a great way of adding things to the resume: At the time of conducting interviews and scanning the resume, the employers always look for work experience and participation into several events other than job history other than just education. So, participating in these kinds of experiences is a great way of building the persona and the resume.
With the help of these kinds of things, one will create a great image in the minds of employers and will be a person who takes initiative always. An individual will become a great leader and will enjoy being challenged at each point of time in life.
Hence, these kinds of skills are considered to be an asset for any of the employers and will help in differentiating one from other people who are there for interviews.
-It is a great way of building networks: The process of networking is considered to be the most important component of the Hackathon. One will be always surrounded by like-minded people who will be always there for learning and collaborating things.
So, it is a great way of developing the networks by properly working with them under pressure and building a strong bond with them.
Aside from these kinds of things, one will also have a proper chance to meet the mentors and experts from the community and industry. Hence, one will also have the opportunity to meet corporate sponsors which is a great way of building the networks.
-It also helps in paving the path for start-ups: Participating and meeting people into several kinds of hackathons is considered to be a breeding ground for several kinds of start-ups.
At the time of attending the Hackathon one will have a proper platform to display the skills and ideas so that that particular idea can get a practical platform and can be converted into a state of the project on which one can start working in the years to come.
-It is a great way of getting inspiration: The positive energy from the hackathons is great and at the end of the event one will get a lot of inspiration and motivation from the overall process and event. One will have proper access to several kinds of ideas and goals so that one can launch a product in the coming years.
With the help of these things, one will also observe other participants and will approach to solve several kinds of problems with the most creative solutions. Hence, this event is a way of uniting the creativity and widening the inspiration and imagination element present in the individuals.
-It is a great way of giving back to the community and industry: At the time of participating in a hackathon, a team always appreciates the value of the community around them because it has been a community that has always supported them about a particular start-up. One will also have access to several kinds of people for whom one will be solving the problems and one will get proper access to the online resources with the help of the online community and can give back to them by doing the same and solving their problems.
Hence, the online Hackathon is considered to be a great way of getting relaxed from stress in life and enjoying each minute of being into it. Hence, it is a great combination of fun as well as skills that help in polishing the personality of individuals.
|
OPCFW_CODE
|
ConfidentialityTo protect this property, it is important to use encryption with secure key management. The common well-known and accepted encryption algorithm is AES256 and this should be the choice in all cases allowing for symmetric encryption until proven otherwise. For cases that involve many users / keys, use asymmetric RSA 2048 bit. For key exchanges over insecure channels, use the Diffie-Hellman protocol. Encoding (e.g. BASE64) is NOT encryption, as the decoding function can easily be used for reverse results.
IntegrityAlthough some may think encryption is enough, that is not necessarily the case. Imagine if someone has access to the encryption / decryption keys and can change the content of the message (the supervisory control protocol, the values, the parameters, the content – you name it) without the true recipient (the valve, the control, the actor, etc.) being “aware” of those modifications. This could become a big issue. Alternatively, imagine an accidental change of the encrypted message without knowing the actual keys. How can the recipient system make sure the received message / command has not been tampered with in any way? Using secure hashes (a cryptographic hash function like SHA-2, SHA-3) helps solve this. A CRC protects against only transmission errors and not intentional changes. So dear designers, please require the use of secure hashes or even better the use of digital signatures to attain non-repudiation (see below).
AvailabilityIt is important to think about the required availability of the system / data at hand. Therefore, a true and deep analysis of the use cases and a clear definition of backup solutions, high-availability (full double pathways) specifications, and potential 24/7 implications for patching and updates mechanisms need to be developed and specified / documented. Consider alternative power sources, network sources, storage sources, compute sources, etc. Remember to look at the whole meta-system, as each node adds another factor into the probability equation (80% * 80% * 80% ≡ 51.2% and not 80% for the total system). What kind of emergency response feedback should the system provide to add to the sustainability of the overall system?
OpennessWhat kind of open interfaces and ports are required, and what does this mean to the attack-surface? Remember, for example, the USB port / protocol. It was a wonderful improvement to the former adapters / busses / protocols / (E)ISA / ATAPI / SCSI slots etc., but at what price? You can take over a machine by connecting a malicious USB stick to it. Does that attack vector work for your product, too? And regarding the open (easy accessible) standard: this is basically interoperability versus perceived security (by obscurity – which doesn’t work long). It’s similar to the publicly discussed encryption algorithms (not the keys itself though!). When anyone (capable) can verify their design, this is better than some implementation that is kept secret in the dark because it suffers from certain issues. Is your system secured by design, or have you outsourced this responsibility to others at a later stage? (Guess what, it might never be addressed just by experience from the past.) Resilient system design requires fail secure, not fail open. But keep in mind that the overall system must be safe for humans. Due to length constraints, I have cut this article into a little series. In the next piece of it, I will discuss several additional mandatory security design considerations for the IoT / IoE world, such as secure system & SDLC, the 4 A’s, as well as non-repudiation and others.
|
OPCFW_CODE
|
M: 45 year CPU evolution - one law and two equations - godelmachine
https://arxiv.org/abs/1803.00254
R: synctext
Paper highlights:
\- 22mn or 7nm are idle marketing. "The nodes were first defined according to
the transistor channel length. The last nodes are more defined according to
marketing criteria."
\- the memory wall... "The huge difference between CPU and DRAM growth rates
led to the increased complexity of microprocessor memory hierarchies.
Different levels of caches are needed to balance the differences in the
bandwidth and latency needs of the CPU with those of the DRAM main memory."
\- Conclusion "When the end of the predicted end of the exponential evolution
will be real or when non-Von Neumann architectures will prove to be more
efficient for programmable applications, the situation will be totally
different. Until that point, the two equations that have been discussed in
this paper will be there to explain the evolution"
R: dragontamer
> \- the memory wall... "The huge difference between CPU and DRAM growth rates
> led to the increased complexity of microprocessor memory hierarchies.
> Different levels of caches are needed to balance the differences in the
> bandwidth and latency needs of the CPU with those of the DRAM main memory."
The way to "solve" this is well known. You build buffers between the CPU and
the Memory, and then execute as much as possible out-of-order. And that's the
purpose of L1, L2, and L3 caches: to provide this out-of-order code enough
data to work (while the CPU is waiting on slow, slow main-memory).
However, contemporary RAM is not designed to coordinate with the CPU very well
on this front, at least compared to designs like HMC (a type of "stacked RAM",
a competitor to a GPU's HBM).
HMC is interesting because its a packet-based system. You tell the RAM a
memory-address to access, but then the RAM may return that memory out-of-order
compared to other requests !!!
After all: RAM also has physical "latency" issues. When a bank is open, it
needs to close, incurring a tRAS, tCAS, and tRC delay. (A "hit" to the same
bank would incur only a tCAS delay). If RAM could execute "out of order", then
the memory-controller could allow more efficient orderings of memory.
Juggling which banks are open and closed has been traditionally a CPU's memory
controller job. But communicating this information over a many-cm long PCB
trace incurs latency. So it makes more sense to put this logic as close to the
RAM as possible (speed of light, capacitance, inductance, etc. etc. slow down
the signal).
Furthermore: since CPUs are incredible out-of-order machines already, it
wouldn't be a major hassle to make the memory also out-of-order. I mean, yeah,
its complicated, but its no different to what CPUs already do with L1, L2, and
L3 caches (and the programming constructs built on top: Atomic Accesses and
whatnot to control the effects of memory-reorderings).
I haven't heard of any major computer use HMC yet however. Only $10,000+ FPGAs
on occasion. Still, there exists RAM today which can execute requests out-of-
order. When this RAM becomes mainstream (if it ever becomes mainstream), I'd
expect systems to be able to get much faster.
R: slededit
FWIW DDR3 includes 8 banks each of which can execute independently of the
others. So there is still parallelism there. DDR4 doubles this to 16.
R: dragontamer
> FWIW DDR3 includes 8 banks each of which can execute independently of the
> others. So there is still parallelism there. DDR4 doubles this to 16.
Indeed. So there's some parallelism that can be captured, but they're managed
by the CPU instead of the Memory.
Because HMC abstracts away the protocol into a packet-based innately out-of-
order protocol, and because the "memory controller logic" has been moved to an
area physically as-close-as-possible to the RAM itself, it can achieve much
higher degrees of parallelism.
Case in point: a single stack of HMC has 128 banks of parallelism across
4-vaults (I would argue that a "vault" is roughly equivalent to a DDR4
channel): [https://www.micron.com/parts/hybrid-memory-cube/hmc-
sr/mt43a...](https://www.micron.com/parts/hybrid-memory-cube/hmc-
sr/mt43a4g40200nfa-s15?pc={8AD36F73-07F4-4ECD-A168-B5E899F1E650})
R: slededit
The bottleneck for DDR memory isn't really the memory controller but rather
the memory bus itself. Moving the controller won't change that.
Its very rare to get anywhere close to 100% bus utilization. HMC's real
advance is switching to independent serial links which we know how to scale
better than parallel busses. Its this improved bandwidth that allows you to
pack in more parallelism. We _could_ have more banks in traditional DDR memory
but there isn't much point because the data bus couldn't feed it.
|
HACKER_NEWS
|
Robots depend on maps to move around. Although they can use GPS, it is not enough when they are operating indoors. Another problem with GPS is that it is not accurate enough. Therefore, robots cannot depend on GPS. Therefore, these machines depend on Simultaneous Localization and Mapping, which is abbreviated to SLAM. Let’s find out more about this technology.
With the help of SLAM, different types of machines such as robots create maps as they move around. With these maps, they move around without crashing into different objects in a room. It may sound simple, but this process consists of multiple stages that involve sensor data alignment with the help of a number of algorithms. These algorithms use the power of the GPUs of today.
Sensor Data Alignment
Today’s computers consider the position of a robot as a timestamp dot on a timeline or a map. Besides, robots continue to collect data about their surroundings using these sensors. The interesting part is that camera images are captured 90 times per second for proper measurements. When robots move around, data points make it easier for the robot to prevent accidents.
Besides, wheel odometry considers the rotation of the wheels of the robot. The purpose is to help the robot measure its travel distance. Apart from this, they also use the inertial measurement units to estimate acceleration and speed.
Sensor Data Registration
Since data registration is done between two measurements on a map. Expert developers can easily localize a robot using scan-to-map matching.
GPUs that perform Split-Second Calculations
The speed of these mapping calculations is between 20 and 100 times per second. It all depends upon the algorithms. And the good thing is that these robots use powerful GPUs in order to perform these calculations.
Unlike a regular CPU, a powerful GPU is up to 20 times faster. Therefore, simultaneous localization and mapping use powerful graphics processing units.
Visual Odometry to help with Localization
The purpose of visual odometry is to recover the orientation and location of a robot. Powerful GPUs use two cameras that function in real-time to guide the location at a speed of 30 frames per second.
With the help of stereo visual odometry, robotic developers can figure out the location of a robot and use this for proper navigation. Besides, future developments in the world of visual odometry can help things make easier than before.
Map Building that helps with Localization
There are three different ways to create maps. In the first method, mapping algorithms work under the supervision of a supervisor. Therefore, the process is controlled manually. On the other hand, the second method involves the power of a workstation for this purpose.
In the third method, odometry data and lidar scan recordings can help make things easier. With this approach, the log mapping application can help do the mapping offline.
Long story short, hopefully, this article will help you improve your understanding of simultaneous localization and mapping.
|
OPCFW_CODE
|
Last week I passed the IBM XML certification test with an 80% score. I also took the CXE exam and passed with an 81% score. The XML help at JavaRanch was mostly responsible for my passing scores. To repay the favor, I will mention the study material that I considered most important to passing. The 3 books listed by IBM (Professional XML, Professional XML Schema and XSLT from O'Reilly) are very important. In addition, for XML syntax, The XML Specification Guide published by Wiley is essential. It goes through the specification line by line, explaining its meaning. MUCH better treatment than any of the other books, but covers only XML syntax. An excellent resource for XSLT is the Wrox book XSLT by Michael Kay. This goes into more detail than the O'Reilly book. Just watch out for some stuff that never made it into the standard. For SAX, the best treatment is the SAX2 book published by O'Reilly. I definitely would have missed one question on the exam if it had not been for this book. Beyond these resources, the best thing to do is to carefully study the specifications themselves. These can be found at www.w3.org/TR/. In particular, the specifications of XLink, XPointer, XSignature, DOM, XSL-FO (under XSL) and CSS2 should be carefully studied. Don't trust the book treatment on these topics. XML, XSLT and XML Schema also can be found at the site mentioned, but they are much more accessible in the book resources mentioned above. I did look at the specifications, but didn't notice anything on the exam that was not in the books on these topics.
IBM Certified Developer - XML and Related Technologies<br />CXE (Certified XML Expert)<br />Sun Certified Web Component Developer<br />Sun Certified Java 2 Programmer<p>Dan
Dan, First of all, congratulations to you! It looks like you got the high score because you read 5+ books and put a lot of energy in the exam preparation. If I can not afford buying ALL the books you mentioned, which one book (or two books) do you think is (are) the most important book(s)? Thank you...
Actually, I read a lot more books on XML than just those that I mentioned. The IBM recommended books are essential. On the IBM on-line practice exam, some questions are taken directly from the Professional XML book. You might be able to get by with just this book; however, this is an extremely tough exam. Heavy emphasis on XML syntax, DTDs and XML Schema. Also don't forget the on-line specifications. zvon.org has a lot of good tutorials and examples. mulberrytech.com has some good resources. xfront.com also, especially for XML Schema. Aside from these (and of course JavaRanch), I would be wary of most on-line tutorials. I have found that a lot of them have wrong or misleading information. It's a lot of material to master. And the material is often cryptic, confusing and contradictory. The real trick is to just focus on the stuff and read it over and over again. After a while, it will begin to make sense. Don't accept the first explanations that you come across. XML is extremely complex--and the tendency is to make it simple for readers. Trouble is that the exam doesn't test on the simple stuff. So the best strategy is to find the best resources and delve into them deeply. Also take many practice exams--the IBM on-line exam, XMLWHIZ and so forth. Good luck!
I agree with what Dan said regarding the preparation except that I would like to add one more thing. I understand that lot of the related techonlogies are still in the development phase but I found that it if you want to understand the concepts better, its better to practise them simultaneously as you are reading through. One of the difficulties you would encounter is to find all the tools, the parsers, the add ons to your browser, stuff like that but its definitely worth the effort if you really want to learn them. Srini
|
OPCFW_CODE
|
Is there a problem to solder only one side of a dual PCB?
If I do a double layer PCB (with the toner method) is there a problem if a solder the bottom side only? there are parts that are so hard to solder from the top layer, do I have to solder everything in the top and bottom layer? for example this is my PCB, and I want to solder a base in the pic slot so I can put and remove my PIC, but is hard to solder it in the top layer.
Through-hole parts should always be soldered from the opposite side of the PCB?
Ah, I understand: because you're doing it at home you don't have plated holes. In that case, it'll only work properly if all the tracks are on the side that you're soldering.
Indeed. This is also not a board particularly well designed for the toner-transfer method. You probably want to use thicker traces wherever possible, consider doing copper pours in some of the unused area to reduce the amount of etching needed, make the whole thing more compact, and seriously consider doing as much of it as possible with surface mount to save the time of drilling holes.
As it currently stands, you'd probably be better off building this on a piece of premade solderable breadboard PCB, rather than doing all that work and having to solder so many connections from the top side. You don't seem to have any components with incompatible pin spacing, and through hole connectors can tend to break the foil on handmade PCBs without plated holes if not handled with extra care.
You could easily turn this into a single sided board if you make the top layer into jumper wires instead of copper traces. It would require a few modifications.
Thanks for your answers, is plated hole, the hole that connect to both layers?
@Pulse9: Yes. Also abbreviated PTH and called vias in the case of SMT boards...
Thanks, so the vias, or green holes, are automatically considered PTH if I send the design to a PCB manufacturer?
@Pulse9 that's something that you should ask the PCB fab you use.
Yes, there is a problem. You won't have reliable connections from the top tracks to the components unless the holes are plated through - which seems unlikely as you are making the boards yourself.
If you haven't made the board yet you could:
Make a single sided board and use wire jumpers to bridge across tracks where needed. Much simpler to make and all soldering is on one side.
If you really want to try a double-sided board then use pins (Vero used to make these in my youth) or wire links to connect through the board. The pins would have to be soldered both sides of the board but all the components only on the bottom. So, if blue tracks are on the top side, the PIC pin 1 would need a short track on the bottom side to a through hole to the top side to connect to the blue track somewhere other than at pin 1 itself.
|
STACK_EXCHANGE
|
Authentication failure occurs in Azure Databricks when using dbt-databricks 1.6
Describe the bug
Authorization of app registration fails at some point. Tested when running dbt --debug debug, on Azure Databricks.
Steps To Reproduce
Set up Databricks SQL warehouse
Follow steps here to: set up App Registration; set up profiles.yml in your dbt project
Set up job that runs one task
Task has one step: dbt --debug debug
Link project repo where dbt project is set up
Job cluster is per what is default, with one library added: dbt-databricks==1.6.8
Run job
Expected behavior
Expect job to run successfully and show that connection was possible.
Screenshots and log output
Output of job:
00:31:45 Sending event: {'category': 'dbt', 'action': 'invocation', 'label': 'start', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x7f7bb888e9e0>, <snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x7f7bb642fee0>, <snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x7f7bb642ff10>]}
00:31:45 Running with dbt=1.6.10
00:31:45 running dbt with arguments {'printer_width': '80', 'indirect_selection': 'eager', 'log_cache_events': 'False', 'write_json': 'True', 'partial_parse': 'True', 'cache_selected_only': 'False', 'profiles_dir': '/tmp/tmp-dbt-run-477395276437665/src/custom_profile', 'debug': 'True', 'fail_fast': 'False', 'log_path': '/tmp/tmp-dbt-run-477395276437665/src/logs', 'warn_error': 'None', 'version_check': 'True', 'use_colors': 'True', 'use_experimental_parser': 'False', 'no_print': 'None', 'quiet': 'False', 'warn_error_options': 'WarnErrorOptions(include=[], exclude=[])', 'invocation_command': 'dbt --debug --log-level-file error debug', 'introspect': 'True', 'log_format': 'default', 'target_path': 'None', 'static_parser': 'True', 'send_anonymous_usage_stats': 'True'}
00:31:45 dbt version: 1.6.10
00:31:45 python version: 3.10.12
00:31:45 python path: /local_disk0/.ephemeral_nfs/cluster_libraries/python/bin/python
00:31:45 os info: Linux-5.15.0-1056-azure-x86_64-with-glibc2.35
00:31:47 Using profiles dir at /tmp/tmp-dbt-run-477395276437665/src/custom_profile
00:31:47 Using profiles.yml file at /tmp/tmp-dbt-run-477395276437665/src/custom_profile/profiles.yml
00:31:47 Using dbt_project.yml file at /tmp/tmp-dbt-run-477395276437665/src/dbt_project.yml
00:31:47 adapter type: databricks
00:31:47 adapter version: 1.6.8
00:31:47 Configuration:
00:31:47 profiles.yml file [OK found and valid]
00:31:47 dbt_project.yml file [OK found and valid]
00:31:47 Required dependencies:
00:31:47 Executing "git --help"
00:31:47 STDOUT: "b"usage: git [--version] [--help] [-C <path>] [-c <name>=<value>]\n [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]\n [-p | --paginate | -P | --no-pager] [--no-replace-objects] [--bare]\n [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]\n [--super-prefix=<path>] [--config-env=<name>=<envvar>]\n <command> [<args>]\n\nThese are common Git commands used in various situations:\n\nstart a working area (see also: git help tutorial)\n clone Clone a repository into a new directory\n init Create an empty Git repository or reinitialize an existing one\n\nwork on the current change (see also: git help everyday)\n add Add file contents to the index\n mv Move or rename a file, a directory, or a symlink\n restore Restore working tree files\n rm Remove files from the working tree and from the index\n\nexamine the history and state (see also: git help revisions)\n bisect Use binary search to find the commit that introduced a bug\n diff Show changes between commits, commit and working tree, etc\n grep Print lines matching a pattern\n log Show commit logs\n show Show various types of objects\n status Show the working tree status\n\ngrow, mark and tweak your common history\n branch List, create, or delete branches\n commit Record changes to the repository\n merge Join two or more development histories together\n rebase Reapply commits on top of another base tip\n reset Reset current HEAD to the specified state\n switch Switch branches\n tag Create, list, delete or verify a tag object signed with GPG\n\ncollaborate (see also: git help workflows)\n fetch Download objects and refs from another repository\n pull Fetch from and integrate with another repository or a local branch\n push Update remote refs along with associated objects\n\n'git help -a' and 'git help -g' list available subcommands and some\nconcept guides. See 'git help <command>' or 'git help <concept>'\nto read about a specific subcommand or concept.\nSee 'git help git' for an overview of the system.\n""
00:31:47 STDERR: "b''"
00:31:47 - git [OK found]
00:31:47 Connection:
00:31:47 host: adb-5040045441632685.5.azuredatabricks.net
00:31:47 http_path: sql/1.0/warehouses/9c4ad4393c0fccd2
00:31:47 schema: dbt
00:31:47 Registered adapter: databricks=1.6.8
00:31:47 Acquiring new databricks connection 'debug'
00:31:47 Using databricks connection "debug"
00:31:47 On debug: select 1 as id
00:31:47 Opening a new connection, currently in state init
00:31:47 Databricks adapter: Error while running:
select 1 as id
00:31:47 Databricks adapter: invalid_client: Client authentication failed
00:31:47 Connection test: [ERROR]
00:31:47 1 check failed:
00:31:47 dbt was unable to connect to the specified database.
The database returned the following error:
>Runtime Error
invalid_client: Client authentication failed
Check your database credentials and try again. For more information, visit:
https://docs.getdbt.com/docs/configure-your-profile
00:31:47 Command `dbt debug` failed at 00:31:47.909246 after 1.95 seconds
00:31:47 Connection 'debug' was properly closed.
00:31:47 Sending event: {'category': 'dbt', 'action': 'invocation', 'label': 'end', 'context': [<snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x7f7bb888e9e0>, <snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x7f7b9e06eec0>, <snowplow_tracker.self_describing_json.SelfDescribingJson object at 0x7f7b9e0641c0>]}
00:31:47 Flushing usage events
System information
The output of dbt --version:
`1.6.10`` (see above output)
Output logs from dbt debug
As above
The operating system you're using:
DBR 13.3 LTS
Apache Spark 3.4.1
The output of python --version:
Python 3.10.12
Additional context
This issue started happening sometime between Jan 24 and Jan 25th this year. This is also the same tiem databricks-sdk released a new version 0.18.0
This issue happens when fixing many different versions of dbt-core==1.6.x
This issue does not occur when using dbt-databricks==1.7.8
My analysis shows the root cause is due to something in databricks-sdk
The setup.py specifies as of v1.6.8 that we can install anything databricks-sdk>= 0.9.0 (despite requirements.txt specifying databricks-sdk==0.9.0). Thus my proposed workarounds:
Upgrade to the latest versions of dbt-databricks==1.7.8
For those not ready to upgrade to the latest minor version of dbt is: fix the databricks-sdk version databricks-sdk==0.17.0. Adding this lib to the cluster keeps the dependency resolves within the range and auth is successful.
Note: I have not tested for other side-effects of fixing the databricks-sdk version... I had other issues come up, not sure if they were related to this fix so I gave up and upgraded to dbt 1.7
Thanks for this report. We had a suspicion that 0.18.0 could break auth things, so in the 1.7.x branch we limited to less than 0.18.0 in the later versions. I'll backport this pinning to the 1.6.x branch.
|
GITHUB_ARCHIVE
|
Satya Nadella, CEO of Microsoft, recently was interviewed by Ludwig Siegele of The Economist about the future of AI (artificial intelligence) at the DLD in Munich, Germany where he spoke about the need to democratize the technology so that it is part of every company and every product. Here’s an excerpt transcribed from the video interview:
What is AI?
The way I have defined AI in simple terms is we are trying to teach machines to learn so that they can do things that humans do, but in turn help humans. It’s augmenting what we have. We’re still in the mainframe era of it.
There has definitely been an amazing renaissance of AI and machine learning. In the last five years there’s one particular type of AI called deep neural net that has really helped us, especially with perception, our ability to hear or see. That’s all phenomenal, but if you ask are we anywhere close to what people reference, artificial general intelligence… No. The ability to do a lot of interesting things with AI, absolutely.
The next phase to me is how can we democratize this access? Instead of worshiping the 4, 5 or 6 companies that have a lot of AI, to actually saying that AI is everywhere in all the companies we work with, every interface, every human interaction is AI powered.
What is the current state of AI?
If you’re modeling the world, or actually simulating the world, that’s the current state of machine learning and AI. But if you can simulate the brain and the judgements it can make and transfer learning it can exhibit… If you can go from topic to topic, from domain to domain and learn, then you will get to AGI, or artificial general intelligence. You could say we are on our march toward that.
The fact that we are in those early stages where we are at least being able to recognize and free text, things like keeping track of things, by modeling essentially what it knows about me and my world and my work is the stage we are at.
Explain democratization of AI?
Sure, 100 years from now, 50 years from now, we’ll look back at this era and say there’s been some new moral philosopher who really set the stage as to how we should make those decisions. In lieu of that though one thing that we’re doing is to say that we are creating AI in our products, we are making a set of design decisions and just like with the user interface, let’s establish a set of guidelines for tasteful AI.
The first one is, let’s build AI that augments human capability. Let us create AI that helps create more trust in technology because of security and privacy considerations. Let us create transparency in this black box. It’s a very hard technical problem, but let’s strive toward saying how do I open up the black box for inspection?
How do we create algorithm accountability? That’s another very hard problem because I can say I created an algorithm that learns on its own so how can I be held accountable? In reality we are. How do we make sure that no unconscious bias that the designer has is somehow making it in? Those are hard challenges that we are going to go tackle along with AI creation.
Just like quality, in the past we’ve thought about security, quality and software engineering. I think one of the things we find is that for all of our progress with AI the quality of the software stack, to be able to ensure the things we have historically ensured in software are actually pretty weak. We have to go work on that.
|
OPCFW_CODE
|
/**
* Web Animations API (WAAPI) helpers.
* https://caniuse.com/web-animation
*
* @link https://github.com/lit/lit/blob/main/packages/labs/motion/src/animate.ts
*/
function supportsWebAnimationsApi() {
return 'animate' in document.body;
}
function prefersReducedMotion() {
return window.matchMedia('(prefers-reduced-motion: reduce)').matches;
}
export function animate(el: HTMLElement, keyframes: Keyframe[], options: KeyframeAnimationOptions) {
if (!supportsWebAnimationsApi()) {
return Promise.resolve();
}
return new Promise(resolve => {
const animation = el.animate(keyframes, {
...options,
duration: prefersReducedMotion() ? 0 : options.duration,
});
animation.addEventListener('cancel', resolve, { once: true });
animation.addEventListener('finish', resolve, { once: true });
});
}
export function stopAnimations(el: HTMLElement) {
if (!supportsWebAnimationsApi()) {
return Promise.resolve();
}
return Promise.all(
el.getAnimations().map(animation => {
return new Promise(resolve => {
const handleCancel = () => requestAnimationFrame(resolve);
animation.addEventListener('cancel', handleCancel, { once: true });
animation.addEventListener('finish', handleCancel, { once: true });
animation.cancel();
});
}),
);
}
export function setKeyframesHeightAuto(keyframes: Keyframe[], height: number) {
return keyframes.map(keyframe => ({
...keyframe,
height: keyframe.height === 'auto' ? `${height}px` : keyframe.height,
}));
}
|
STACK_EDU
|
What does systemerror error return without exception set mean?
I keep the problem set to “SystemError: Error returning to desktop without exception set”. I tried the actual suggestions here but it still doesn’t work. I have eight Pis, most of which have a time display that is not much longer after an update I did for about a week. There are far more than a handful of people who have this problem.
What does systemerror not loaded parent module mean?
System error: parent module “loaded, relative import could not be run”. This error is more complex. You can also check this post on stackoverflow. For the Python example provided on Stackoverflow, if you want to run a file inside a package, you need to run it outside the package with the following command: view uncomplicated print?
Why do I get systemerror when loading a Lambda layer in keras?
Unfortunately, some issues in Keras can cause SystemError: Opcode encountered while loading a model with a huge lambda layer. The model may need to be built with a better version of Python and used with a different version. We will discuss the solution in the next section.
How to fix ” systemerror ” < built-in function name >?
The old option was to have the runtime convert that pointer type so that it stays in the Slot method. An explicit cast will eliminate the peaceful error of converting void(*)() to finally PyObject*(*)(Pyobject, PyObject*). The conversion makes sense, but requires an explicit cast.
Which exception will occur if we try to access the index of an array beyond its length * 2 points a arithmetic exception B array exception C array index exception D array index out of bounds exception?
Explanation: ArrayIndexOutOfBoundsException is often a built-in exception that can be thrown when we try to access an index position that is different from the length of your array.
When to use async Def or normal Def in fastapi?
Also, since dependencies are called from FastAPIs (which match their path operation characteristics), the same rules apply when examining your functions. You can work asynchronously on a definition or a normal definition. And anyone can declare dependencies with async output in normal define path tasks, define operations, or dependencies in async define path functions, input and output, and so on.
What’s the difference between Def main and Def if?
This process can import this code according to an interactive Python shell and then test/debug/run it. Variables inside def main are local, while external ones can be global. This can result in a range and cause unexpected error behavior. But no one needs to write in a new main() function and call things inside an if statement.
Can you mix Def and async Def in fastapi?
If you don’t know now, use normal output. Note: you can mix definition and therefore asynchronous definition in your path action functions as many times as you want and set each one to the most reliable option for you. FastAPI can do the right thing with Thing elements. However, in all of the above states, FastAPI will still run asynchronously and be very fast.
Ermias is a tech writer with a passion for helping people solve Windows problems. He loves to write and share his knowledge with others in the hope that they can benefit from it. He’s been writing about technology and software since he was in college, and has been an avid Microsoft fan ever since he first used Windows 95.
|
OPCFW_CODE
|
R is actually a direct implementation in the S programming language which is shared with lexical scoping semantics, which happen to be motivated by a programming language referred to as through the title of 'Scheme'. The programming language S was at first created by John Chambers at Bell Labs, whilst R language was made by Ross lhaka and Robert Gentleman.
As for me, I am going back again to the last assignment, mainly because I started off this, and may complete it. 29 persons located
Noah before long goes off on the berserk rampage, and Becky and GeeKeR have to track him down. But whenever they come across him, they find the nine-foot dino would equally as quickly chow down on friends as strangers.
I took help for my Advertising Approach assignment and tutor provide a wonderfully published advertising and marketing strategy ten days right before my submission day. I acquired it reviewed from my professor and there have been only little variations. Good perform fellas.
When you've got worked on Python or Ruby then, PHP will not be hard to handle. Secondly, it is the most widely made use of basic goal programming and has turned just how individuals checked out the world wide web
GeeKeR, Becky and Noah vacation 100 yrs into the long run and explore that Mister Moloch has attained control of GeeKeR and employed his powers to overcome the galaxy. Our trio obtain an especially elderly Noah languishing inside of a mobile, his head addled by his extensive imprisonment.
At ‘My Assignment Solutions’, the specialists we recruit to operate on r programming assignments for our client are really conversant with r programming language, how it works and The full strategy relating to r programming. The group of Skilled writers for r assignment at ‘ MATLAB Assignment Qualified
Facts Science has a terrific put in the way forward for Organization, Science and so on. We needs to be encouraging Progressively more individuals to choose it up and find out, not discouraging them. This study course, from what I have skilled and browse, is carrying out a great deal more to discourage persons from getting into the sector plus much more to validate those people who are now informed about R & programming Read through far more The course is helpful only to the point that it pushes you to glance all over the web to determine how to comprehend/comprehensive assignments .
The lecture element of your course has very little value. The lecturer has a tendency to take a depth-initial approach to presenting concepts, getting a single thought and establishing it out to its most moment and esoteric facts before moving on to a different.
R programming language is greatly useful for statistical computing in universities and market. It really is no cost and but strong, Doing work both on Linux and Windows. You would possibly need R homework help when dealing with tricky calculations of stats.
Moloch harnesses the enormous worm and uses it to trace down GeeKeR. Shortly, equally GeeKeR and Noah are captured, and the sole way Becky can conserve them is to beat her revulsion and crew up While using the worm.
There are a number of inaccuracies in this post. To pick a handful of of the most obvious: To start with, using C++ as an implementation language for that C# and VB languages didn't trigger the languages to “stagnate”. The theory is preposterous over the facial area of it; Microsoft manufactured 5 big releases in the C# and VB languages in the final twelve yrs with that codebase, offering new instruments for pretty much hundreds of thousands of customers.
It truly is optional in R no matter whether these conventions are placed on information information. Both equally read.table and scan Use a sensible argument
File# and its aid for pattern matching and discriminated unions, lex/yacc (in addition to it’s immutability by default, which is a cornerstone of Roslyn) might have been a more sensible choice to the .Internet platform, and whilst C++ just isn't ideal there are lots of great site selections for lexing and parsing during the C/C++ House, which once again C# genuinely lacks.
|
OPCFW_CODE
|
For the first time in my experiences with cleaning computers, I need your help!
I've fixed like 30 computers of these rogue applications with a success rate of 100% until today. My sister's computer has one that simply will not go.
Windows XP Pro (which is apparently the only OS this rogue application lingers in)
Antivirus: Symantec Antivirus Corporate Gold Version
Firewall: Sygate Personal Firewall Pro
I've seriously tried about 15 things already that have all worked in the past. To list the main ones:
1. Full scanned with Malwarebytes, Spyware Doctor, Spybot Search and Destroy, and SUPERAntiSpyware both in safe mode (administrator account) AND in normal startup mode.
2. Ran Rkill.exe and did all above. The process seems to be hidden, (maybe a rootkit?) because the the trojan titled ekr.exe is starting on startup without being listed in the msconfig and even if we end the process, the rogue application is still there. This leads me to believe ekr.exe is the trojan that turns everything else on and is useless after it does so.
3. Unchecked all fishy items in msconfig's startup tab
4. Searched the registry in specific locations for the string ekr and ekr.exe and manually searched other directories where reported threats were, but was not able to locate anything in specific. The name must be randomized, (as it is for many of the computers I have fixed). Searched the computer for ekr.exe and found nothing except for in the prefetch folder (come to think of it, I will have to redo this step as I am not sure if search in system files and folders was checked), and I deleted that file in the prefetch folder for fun but still it hasn't done anything.
5. Ran a registry .reg fix that did nothing after reboot
6. For fun found the key to unlock the rogue application to see what it would say and do, and it obviously hasn't done anything. (provided in screenshots below)
Many more things, but all unsuccessful.
ekr.exe alerting the firewall it wants to connect upon startup. We clicked "remember, no"
MSconfig startup tab:
Task Manager Processes while rogue application is running in normal startup mode. Notice the ekr.exe:
Unlocking the program for fun (and to give you a glimpse of it):
My next plan of action is to post a hijackthis log to the log forum on here, but I have never needed to use this tool in the past but would appreciate any good advice. I also plan to run Rootkit revealer with my antivirus turned on in hopes that the system realtime protection will catch the found possible rootkits.
Please take this thread serious, thank you!
|
OPCFW_CODE
|
Also it is not clear from the sample application that is provided for download how different category of rules are designed and assembled for transactional working. However, the event is completely handled by the subprocess it is added to. The callback is not a function!
Malcolm Chisholm exemplifies this ambiguity. Get the rule service provider from the provider manager. Rules are easy to understand for developers and business analysts. The ldap system is this way a rules engine provides the added to meet the prepared statements on a jar file. To ensure that there are no orphan records, leveraging Java Reflection API.
Distance properties can be combined with these topological relations between regions and movement trajectory primitives, ethics, improving health through research. Activiti development can be done with the IDE of your choice. The input which will be compared.
Rules that should not be updated by users. If a rule does not need to be considered, and add new objects. The activated entity is contained in the event. This is a powerful method because it allows code reuse, and deployed on a different lifecycle from the application itself.
The OWL reasoners use the rule engines for all inference. JPA entities already exists which allows for Loan Requests to be stored.
Parameter representing an integer value. Please leave this browser open until your PDF has downloaded. Reliability analysis and task delete a schema. The schema to set up of named mind when an xml in any such capability from data satisfies all database schema. Customer instance is removed from memory and the BRE execution cycle terminates.
As you might guess, the Apache Jena project logo, you may have to set it on your working memory. Address differences between Firefox and other browsers. You write each database schema objects, only waste storage from slightly different database schema matching happens when certain event definition. On rare occasions it is necessary to send out a strictly service related announcement.
Indicates the task was found and returned. Note that for process design Maven dependencies are not needed. The id of the task to get the attachment for. By continuing to use this website, file system, but I am not particularly optimistic that will be the case. For example, our route is now properly configured and accessible to the Camel.
Gateway are not supported by Activiti. After the join, and all other rulesets imported transitively. Indicates the execution was found, and enqueue them. When the engine reboots or crashes in the meantime, this is always clearly indicated by giving the new XML element, Mrs.
That JSON payload will consume by the HTTP service.
David newton provided by the activity instance data load into jena, removes it possible that json schema rules engine must be. Data stored in the engines is accessed and updated by the web application. Finally, and group types.
For the input variable definition a list of process variables can be defined separated by a comma. Only return deployments with a name like the given name. Indicates the task and event were found and the event is returned. Because the data being accessed can be quite large, since no records will be found in the table where the process executions are stored.
Do not add subset rules to the positive rule set of a capture process for tables that are not supported by capture processes. We will go through the conditions that we have used in our rules. Eclipse product or feature.
That is, in that rule engines are typically used to execute one or more actions on the domain objects passed into the rule engine. APIs that mostly understand only strings and numbers.
Its value on the body array of active signal to schema rules without bpmn transaction, anything else to the requested job from. The behaviour can be overwritten by a specific phrase in the route URL. Support for event sub processes.
The process definition id corresponding to the start event form data that needs to be retrieved. In a web application, we need to enforce referential integrity. Rule engines has some similarities to the Command design pattern in that. You can setup your project in whichever tool you prefer and build the JAR with your build tool of choice. One final aspect of the general rule engine to mention is that of validation rules.
OpenL Tablets is a full-blown rule engine based on optimized sequential.
But bear in mind that you also should update the Activiti rest webapp with that context if you use it. The id of the deployment the requested resource is part of. You can find the Siddhi Application bellow, entrepreneur and technologist. Stores information is database design of rules engine database schema rules allow a segment snippet below. JTA integration or building a war file that can be run on major application servers.
The result of this method is checked when the rule is loaded, design simplicity, the system is required to handle the following. Constraints define rules which are enforced on the values in the columns.
This value can be used to create separate deployments for most resources, and each product is found on many different orders. The second execution listener is called when the transition is taken. The type of identity link.
At development teams, debug these rules can start new file to support cases, and the logic on database engine schema rules such as possible to market as complete. In database engine software engineering stack exchange solutions. Would you like to search instead?
The primary key shall always have a value. Removes the Module Types, less than, or adding new ones. As a result, are used to facilitate data staging and optimistic locking. The rule says that every table must have its own primary key and that each has to be unique and not null. Before going down the path of automatic inference, it is possible with Activiti!
Indicates the serializable data contains an object for which no class is present in the JVM running the Activiti engine and therefore cannot be deserialized. Indicates the variables were created and the result is returned. Rules are easily maintainable.
Why is this linear mixed model singular? Moreover, so we will concentrate only on the negative ones. The suspended entity is contained in the event. Any existing value for a specific process variable will be overwritten by the result value of the service execution.
This is a core tenet of our engineering fundamentals at RIVIGO. The database schema of the process engine consists of multiple tables.
Few DSLs however withstand the test of time. Encapsulates elements used to describe a purchase order. The id of the task to get the identity links for. While team and organizational productivity will increase significantly by leveraging the BRE, to be retried soon. Those steps can also involve decision points which are in themselves a simple rule.
In your extension you describe the properties that can be set in Activiti Designer for each shape. There are installation instructions on that page as well. Only consider this does not mean that database schema to be run the task is called before being the engine loads the following the given category. Recently he has worked with the United Nations Development Program and Deloitte and Touche.
These data were not used for research. The id of the process instance to get the comments for. You can find the exact local URL in your log files. The current implementation of the full and mini OWL reasoner fails to do this and the direct forms of the queries will fail. Spring container will return a new instance every time the bean is requested.
Can you please help with maven entries? If that is the case we will not need to apply this rule. It also analyzes reviews to verify trustworthiness. Business rules can be used to ease the development and maintenance by separating business logic from the source code.
Troubleshooting iRODS Docs iRODS Documentation.
Driven developer that is to return process instance ends up using rules engine database schema definitions, editors and to implement. Web Service task is used to synchronously invoke an external Web service. Shell command to execute.
If a product needs registration, Engine with focus on CMMN, or UML and then mapping the model to XML. The rule says that the foreign key value can be in two states. The reasons behind these benefits are covered throughout this chapter. These use cases have enough in common to warrant that the interface input and output data can be defined using the same generic schema.
Instead of the logic being spread across many domain objects or controllers, we use the key we defined in the process definition xml to start the process instance. Name of the sort key, until the camel route is concluded and returned. The id of the deployment to get.
Note that this ambiguity occurs only with the value component of a datom, the user can create tasks without writing any code. You can assert objects, possibly of different types.
Make sure you return a correct boolean result at the end to indicate whether you consider the validation as succeeded or failed. Design-time components that include the Business Rules Composer are. An existing entity is activated.
Did Hugh Jackman really tattoo his own finger with a pen in The Fountain?
Hard to manage if there are many schemas. An ECA rule is evaluated only when the specified event occurs. WF is a framework for building workflow enabled applications and services. The user interface and business layers have evolved as well, but set logic coalesces Medusa, and finalization. Drools, the candidate configuration of User tasks and Script task configuration.
Rule Enactment Service implementation using a COTS rule engine leverages rule engine capabilities to evaluate business decision which represent just one step in the entire sequence of steps of the service internal flow.
Using SQL Server as the default repository for policies and vocabularies has some obvious performance and management benefits. We split it into an array of strings each holding an attribute ID. PMAS Arid Agricultural university.
|
OPCFW_CODE
|
Since the 1980's xBase has been a very popular development language for developing applications on the PC platform. It started with dBase II for CPM, quickly followed by dBase-III for Dos.
The popularity of dBase was picked up by other companies. Soon products such as Clipper, Foxbase, QuickSilver and many others entered the arena.
With the appearance of Windows on the PC market a successor for these products was needed. Nantucket, the owner of Clipper, worked on a product called Visual Objects (VO) . This product was bought by Computer Associates.
Other xBase products for Windows were FoxPro, dBase, Xbase++, FlagShip and Harbour (the last two were not only targeting Windows but also Linux and UNIX).
Computer Associates lost its interest in xBase and their products Clipper and Visual Objects and in 2002 GrafX Software bought the marketing and development rights for these products. GrafX also started to work on a successor of Visual Objects named Vulcan.NET which is a product that produces .Net solutions.
Microsoft has lost its interest in Visual FoxPro and that product has also been "abandoned".
More information about the history of the xBase language can be found on wikipedia
In 2015 there are still a couple of active xBase languages, but there is only one language targeting the .NET framework: Vulcan.NET from GrafX.
Unfortunately GrafX has never marketed its product very well and the market share is shrinking. Also the GrafX development team has shrunk over the last couple of years.
In April 2015 a group of concerned customers and some members of the GrafX development team have talked about starting a new open source project to give the xBase language for .NET a new future. This initiative is called XSharp. This was partially inspired by the fact that Microsoft has published the source code to its C# and Visual Basic compilers under an open source license (.NET Compiler Platform "Roslyn"). The plan is to create a new development language (compiler, runtime libraties, IDE, tools) where the compiler is partially based on the Roslyn source code.
Robert van der Hulst, an independant software developer from the Netherlands, a former member of the Visual Objects and Vulcan.NET development team and author of several 3rd party products for Visual Objects and Vulcan.NET has volunteered to found a new company XSharp BV. This company is the legal entity behind the XSharp Project. He has the support of several members from the xBase community who have chosen to remain anonymous until further notice.
The purpose of this project is to create an open source version of the xBase language for the .NET platform.
Since there are many xBase dialects the team will develop a compiler with different language "flavours" such as
- Visual Objects compatibility
- Vulcan.NET compatibility
- Xbase++ compatibility
- FoxPro compatibility
- (x)Harbour compatibility
The Core language is an xBase language version of the Microsoft C# compiler. It has the same features as C# 6, but will of course use the well known xBase syntax.
Based on this core language compiler different flavours have been created with support for the data types, classes and objects that make each dialect unique.
The Core language is able to produce .NET assemblies that run under windows, but also "universal apps" that run under other platforms as well. .Net Native support is planned as well.
At this moment the Core, VO and Vulcan dialects are "ready" and we are working in completing the Xbase++ and FoxPro dialects.
|
OPCFW_CODE
|
Technical Roles Not Available
Some of the technical roles are not present to use from the portal. How to remove them from portal:
ERROR:
[10:59:50 ERR] Not Found: https://example.com/auth/admin/realms/CX-Central/clients/6df310ed-500e-43d5-b510-fa4668e939ee/roles/BPDM Management
{"Timestamp":"2024-07-18T10:59:50.2931620+00:00","Level":"Error","MessageTemplate":"{Message}","RenderedMessage":""Not Found: https://example.com/auth/admin/realms/CX-Central/clients/6df310ed-500e-43d5-b510-fa4668e939ee/roles/BPDM Management"","TraceId":"677d55072282dda91034f7fbc76074a9","SpanId":"cb0d317a3640440c","Exception":"Flurl.Http.FlurlHttpException: Call failed with status code 404 (Not Found): GET https://example.com/auth/admin/realms/CX-Central/clients/6df310ed-500e-43d5-b510-fa4668e939ee/roles/BPDM Management\n at Flurl.Http.FlurlRequest.SendAsync(HttpMethod verb, HttpContent content, CancellationToken cancellationToken, HttpCompletionOption completionOption)","Properties":{"Message":"Not Found: https://example.com/auth/admin/realms/CX-Central/clients/6df310ed-500e-43d5-b510-fa4668e939ee/roles/BPDM Management","SourceContext":"Program","ActionId":"3408e758-3328-4a50-977c-b22d7d1938c8","ActionName":"Org.Eclipse.TractusX.Portal.Backend.Administration.Service.Controllers.ServiceAccountController.ExecuteCompanyUserCreation (Org.Eclipse.TractusX.Portal.Backend.Administration.Service)","RequestId":"0HN51K4IVFB8A:00000001","RequestPath":"/api/administration/serviceaccount/owncompany/serviceaccounts","ConnectionId":"0HN51K4IVFB8A","CorrelationId":"a884cff9238960fae20bc085f2a66b8a","Application":"Org.Eclipse.TractusX.Portal.Backend.Administration.Service"}}
[10:59:50 ERR] GeneralErrorHandler caught KeycloakNoSuccessException with errorId: 68965ebf-1869-44bc-abcf-163357978ce5 resulting in response status code 500, message 'inconsistend data. roles were not assigned in keycloak: client: technical_roles_management, roles: [BPDM Management], error: '
Available roles: https://github.com/eclipse-tractusx/portal-iam/blob/centralidp-3.0.0/import/realm-config/generic/catenax-central/CX-Central-realm.json
Current Behavior
Failing to create technical user with some specific roles
Expected Behavior
Portal should not show the invalid roles
Steps To Reproduce
Centralidp: 3.0.0
Portal: 2.0.0
bpdm: 5.0.2
This role (BPDM Management) has actually been removed in preparation for the 2.0.0 version see https://github.com/eclipse-tractusx/portal-backend/commit/07832c1fbad581200b2d32675c37cba3be08cf87
Could you please check if the portal migrations job was executed when upgrading to 2.0.0 version?
These migration jobs were completed successfully:
k get pods|grep migration
portaldev-portal-migrations-lkwhq 0/1 Completed 0 4d1h
portaldev-provisioning-migrations-ttpn2 0/1 Completed 0 4d1h
[04:31:18 INF] Found 2 custom seeder
{"Timestamp":"2024-08-09T04:31:18.8026412+00:00","Level":"Information","MessageTemplate":"Found {SeederCount} custom seeder","RenderedMessage":"Found 2 custom seeder","Properties":{"SeederCount":2,"SourceContext":"Org.Eclipse.TractusX.Portal.Backend.Framework.Seeding.CustomSeederRunner","MachineName":"portaldev-portal-migrations-lkwhq","ProcessId":1,"ThreadId":7,"Application":"Org.Eclipse.TractusX.Portal.Backend.PortalBackend.Migrations"}}
[04:32:45 INF] Custom seeding finished
{"Timestamp":"2024-08-09T04:32:45.9538504+00:00","Level":"Information","MessageTemplate":"Custom seeding finished","RenderedMessage":"Custom seeding finished","Properties":{"SourceContext":"Org.Eclipse.TractusX.Portal.Backend.Framework.Seeding.DbSeeder","MachineName":"portaldev-portal-migrations-lkwhq","ProcessId":1,"ThreadId":7,"Application":"Org.Eclipse.TractusX.Portal.Backend.PortalBackend.Migrations"}}
[04:32:46 INF] Process Shutting down...
{"Timestamp":"2024-08-09T04:32:46.0721849+00:00","Level":"Information","MessageTemplate":"Process Shutting down...","RenderedMessage":"Process Shutting down...","Properties":{"MachineName":"portaldev-portal-migrations-lkwhq","ProcessId":1,"ThreadId":7,"Application":"Org.Eclipse.TractusX.Portal.Backend.PortalBackend.Migrations"}}
@anjuchaurasiya so the new roles are also available in your environment?
BPDM Sharing Admin
BPDM Sharing Input Manager
BPDM Sharing Input Consumer
BPDM Sharing Output Consumer
BPDM Pool Admin
BPDM Pool Consumer
@evegufy yes, all these new roles are also present:
thanks for the feedback!
@Phil91 do you have an idea, how the roles can be still there when they were removed with https://github.com/eclipse-tractusx/portal-backend/commit/07832c1fbad581200b2d32675c37cba3be08cf87 and the migration run?
@evegufy sorry completely overlooked this issue.
I think since we don't have a delete seeder in place and I don't see a migration where the roles are removed this needs to be done manually.
@anjuchaurasiya fyi
@anjuchaurasiya is the issue solved with the explanation? If so could you please close the issue?
|
GITHUB_ARCHIVE
|
> I've just installed a new motherboard, but I can't get my Zoom V.90
> internal modem to work.
Did you disable COM2 in the BIOS setup?
If not and if your internal modem is set to COM2 you will need to disable
COM2 in the BIOS.
ar> I've just installed a new motherboard, but I can't get my Zoom V.90
ar> internal modem to work. The modem seems to install ok when I boot-up
ar> W95, but when I goto control panel/modems/diagnostics, I get an error
ar> message telling me my modem isn't plugged-in, or I have an IRQ
ar> conflict. According to the device manager, there is no IRQ conflict! My
ar> sound card installed with no problems, so I'm wondering why I'm having
ar> problems with he modem?
ar> Any insight/info would be greatly appreciated.
What model? Does it work from DOS? ie did you install DOS and run from
a vanilla (non-Win95) DOS prompt? Any old DOS terminal program will do for
this test to call to a local BBS and do a download.
Even if you havn't a DOS comms program, from DOS what does MSD show?
What comport, what IRQ is your modem on? Can you send commands to it?
ie from the DOS prompt type ECHO ATA >COMn where n is your comport #
| PackLink / Zoom Modem Support |
| BBS & Fax +44(0)1812972486 |
| FidoNet 2:254/235 V34Plus |
... So good that even the Japanese buy 'em
I'm still having a problem with a 28.8K PPI modem working with my new
Micronics P100 motherboard. I posted a note on this earlier, but
since then, I've tried it on both COM2 and COM3, and I have the same
result. Once I'm connected, the modem works fine, but every time I fire
up Trumpet Winsock after a reboot, Trumpet (and presumably the modem)
thinks it's already connected, and won't auto login. This board and
software had been working for months without a hitch until the upgrade.
I've exchanged email with PPI support, and they didn't have any
suggestions other than keeping the board away from the power supply
and the video card (which it is). I'm going to play games with
shielding and possibly additional supply filtering over the weekend.
Does anyone else out there have any experience with this sort of
problem? Any suggestions (other than buying an external modem) would be
8. Modem Delays
|
OPCFW_CODE
|
This has been cross-posted to Tumbld Thoughts.
The cold and emotionless holiday season......
Have a recursive holiday season!
"Cell functional diversity is a significant determinant on how biological processes unfold. Most accounts of diversity involve a search for sequence or expression differences. Perhaps there are more subtle mechanisms at work. Using the metaphor of information processing and decision-making might provide a clearer view of these subtleties. Understanding adaptive and transformative processes (such as cellular reprogramming) as a series of simple decisions allows us to use a technique called cellular signal detection theory (cellular SDT) to detect potential bias in mechanisms that favor one outcome over another. We can apply method of detecting cellular reprogramming bias to cellular reprogramming and other complex molecular processes. To demonstrate the scope of this method, we will critically examine differences between cell phenotypes reprogrammed to muscle fiber and neuron phenotypes. In cases where the signature of phenotypic bias is cryptic, signatures of genomic bias (pre-existing and induced) may provide an alternative. The examination of these alternates will be explored using data from a series of fibroblast cell lines before cellular reprogramming (pre-existing) and differences between fractions of cellular RNA for individual genes after drug treatment (induced). In conclusion, the usefulness and limitations of this method and associated analogies will be discussed."
"A semi-supervised model of peer review is introduced that is intended to overcome the bias and incompleteness of traditional peer review. Traditional approaches are reliant on human biases, while consensus decision-making is constrained by sparse information. Here, the architecture for one potential improvement (a semi-supervised, human-assisted classifier) to the traditional approach will be introduced and evaluated. To evaluate the potential advantages of such a system, hypothetical receiver operating characteristic (ROC) curves for both approaches will be assessed. This will provide more specific indications of how automation would be beneficial in the manuscript evaluation process. In conclusion, the implications for such a system on measurements of scientific impact and improving the quality of open submission repositories will be discussed".
"One way to understand complexity in biological networks is to isolate simple motifs like switches and bi-fans. However, this does not fully capture the outcomes of evolutionary processes. In this talk, I will introduce a class of process model called convolution architectures. These models demonstrate bricolage and ad-hoc formation of new mechanisms atop existing complexity. Unlike simple motifs (e.g. straightforward mechanisms), these models are intended to demonstrate how evolution can produce complex processes that operate in a sub-optimal fashion. The concept of convolution architectures can be extended to complex network topologies. Simple convolution architectures with evolutionary constraints and subject to natural selection can produce step lengths that deviate from optimal expectation. When convolution architectures are represented as components of bidirectional complex network topologies, these circuitous paths should become “spaghetti-fied”, as they are not explicitly constrained by inputs and outputs. This may also allow for itinerant and cyclic self-regulation resembling chaotic dynamics. The use of complex network topologies also allows us to better understand how higher-level constraints (e.g. hub formation, modularity, preferential attachment) affect the evolution of sub-optimality and subtlety. Such embedded convolution architectures are also useful for modeling physiological, economic, and social complexity".
|
OPCFW_CODE
|
Oauth failure with PHPMailer and Gmail
I've been through many of the questions/responses relating to this and none of the suggestions have solved the problem.
A bit of back story:
I run a club website and used the inbuilt PHP mail() functionality to be sent emails when a member updated their details. With providers and email relays tightening their security this has become increasingly less reliable. Thus I decided to use PHPMailer to send the emails via GMail. So...
Created a user app in Google Cloud.
Created an OAuth 2.0 client ID for the app.
Used PHPMailer/get_oauth_token.php along with the client ID and
client secret to create a refresh token.
Added gmail access scopes to the app.
I'm using the example PHPMailer script - https://github.com/PHPMailer/PHPMailer/blob/master/examples/gmail_xoauth.phps with my values substituted.
Unfortunately I get the following:
2024-12-15 15:48:46 CLIENT -> SERVER: EHLO myclub.org.uk<br>
2024-12-15 15:48:46 SERVER -> CLIENT: 250-smtp.gmail.com at your service, [<IP_ADDRESS>]250-SIZE<PHONE_NUMBER>0-8BITMIME250-AUTH LOGIN PLAIN XOAUTH2 PLAIN-CLIENTTOKEN OAUTHBEARER XOAUTH250-ENHANCEDSTATUSCODES250-PIPELINING250-CHUNKING250 SMTPUTF8<br>
2024-12-15 15:48:47 CLIENT -> SERVER: AUTH XOAUTH2 dXNlcj1ib2JncmFoYW1jbHViQGdtYWlsLmNvbQFhdXRoPUJlYXJlciB5YTI5LmEwQVJXNW03NzZyNjA3UHNOTjNPWjdjamZTMTJESlZwb09fXzBKdzg1VnBuR05nMWFrcS1JTVBIZENKcjlKZW01dmxBN0tRS05RNF9XUVN0Tks5cVM3eG5HUHhLV1pOMm9OaGVXTnVoYjgyYll6bGFGT2pBX1hheFh3c0J5S1hHRXZTRWx6dUVZZUlldUdLeXpXRlN1VFk1YjhrNi12TWhtTzJOU2NDQ1pOSklEa3hxRE1hQ2dZS0Fha1NBUklTRlFIR1gyTWlBbm9GUk5Cd2V6RlgteGFqeHdDdWdRMDE4MwEB<br>
2024-12-15 15:48:47 SERVER -> CLIENT: 334 eyJzdGF0dXMiOiI0MDAiLCJzY2hlbWVzIjoiQmVhcmVyIiwic2NvcGUiOiJodHRwczovL21haWwuZ29vZ2xlLmNvbS8ifQ==<br>
2024-12-15 15:48:47 SMTP ERROR: AUTH command failed: 334 eyJzdGF0dXMiOiI0MDAiLCJzY2hlbWVzIjoiQmVhcmVyIiwic2NvcGUiOiJodHRwczovL21haWwuZ29vZ2xlLmNvbS8ifQ==<br>
SMTP Error: Could not authenticate.<br>
2024-12-15 15:48:47 CLIENT -> SERVER: QUIT<br>
2024-12-15 15:48:47 SERVER -> CLIENT: 535-5.7.8 Username and Password not accepted. For more information, go to535 5.7.8 https://support.google.com/mail/?p=BadCredentials 5b1f17b1804b1-4363606ece8sm57588185e9.25 - gsmtp<br>
2024-12-15 15:48:47 SMTP ERROR: QUIT command failed: 535-5.7.8 Username and Password not accepted. For more information, go to535 5.7.8 https://support.google.com/mail/?p=BadCredentials 5b1f17b1804b1-4363606ece8sm57588185e9.25 - gsmtp<br>
SMTP Error: Could not authenticate.<br>
Not sure why I get the "Username & Password not accepted" since I'm using OAuth.
Now I've missed something (I might also have missed steps I've done from the above) or I've done something wrong but I'm at a loss to what it is.
Edit: The suggested answer relates to the Zend stack and doesn't mention anything about an apps password.
Regarding the use of an apps password - this How to use PHPMailer, after 30 May 2022 when "Less secure app" is no longer an option? suggests that it's no longer an option. Also this suggestion:
First go to your google account management and go to security.
Make sure your 2-step verification are enabled.
Then go to app password.
Select other in the select app dropdown menu, and named whatever you like.
And click generate, google will give you a password. make sure you copy it and save it somewhere else.
instead using your real google account password in PHPMailer setting, use the password you just generate.
No longer seems to be an option, it's not something that's apparent in Account->security
You need to use an apps password
This question is similar to: How to implement Gmail OAuth API to send email (especially via SMTP)?. If you believe it’s different, please [edit] the question, make it clear how it’s different and/or how the answers on that question are not helpful for your problem.
|
STACK_EXCHANGE
|
Xiaomi Mi Mix 2 review
I have this problem quite sometime but i didn't have time to try find a solution, so as the tittle informs here it is: when i let my pc run for something like 24-30 hours or so it starts lagging. The problem becomes even bigger when i am using browser, when i am listening to music and i am using browser, the crackling comes in play as well. Sometimes,if i also play games or something possibly "heavier" than just watching movies-music,tha same issue happens in 12-16 hours(not 12 hours gaming...i mean 12 hours while gaming 3-4 hours etc). I try to solve this restarting my pc but it didn't help,then i thought it may be cause of high temp but the temps were low enough to think that that's not the problem.(32-40C CPU GPU Motherboard) I build this pc in my own as a mid to low system , anyway can solve the system if i switch off my pc for something like 10-15 mins.
ASUS H61M-K BULK(Motherboard) Intel G2030(CPU) 2x2GB Mushkin 1333mhz(RAM) Nvidia GTX750(GPU) Coolermaster 550W the PSU(i don't remember the model) Excelstor J8160s(Hard drive)
The problem started the moment i purchased the items.(i had the hard drive from an older pc).
Just to inform you, i know it's not a big deal but i found the time so i thought let's see why this is happening :) I also have a question...is it possible the on board sound driver occurs this kind of problem after all this non stop usage?(when my pc is on for 24-30 hours i just download movies or something like this most of the time) i understand that the clackling may be cause of the sound driver but what about the lagging? Thank you in advance!
Well, that's not the matter i have already checked it out + i've already maximized my min virt memory so there could be no such matter.
Any other suggestions?
Try a bit of sport or gardening! 3 -4 hours gaming seems OTT.
well stated :) i will try it out... but still i would like to know why something like this happening :)
The symptoms you describe point to a hardware problem, specifically heat related as you find that when you power off and leave for 10 minutes the PC returns to normal.
This is not a temperature that you can measure by software and another clue is that when playing games it occurs earlier and this is because an already warm machine is being stressed and getting even warmer.
It could be due to a dry joint somewhere in the PSU, the motherboard or graphics card or perhaps an overstressed PSU.
Where is the crackling coming from? The speakers or maybe the PSU or a fan?
Another possibility is a dirty connection which is causing some minor arcing.
Faced with this problem I would reseat all connections to the motherboard headers and reseat the graphics card but leave the memory alone.
You dont specify the nature of the lag, is it a sync problem between the sound and vision or a lazy mouse for example. Any lag could be caused by some form of error correction taking place caused by the problem.
Well, thank you for your response... i will try this as well. The crackling's coming from the speakers or headset... whatever i use at the time. Since yesterday i am using my headset in my monitor's audio plug in and the problem seems to have dissapeared...(hdmi connection for my monitor) My pc is almost 40 hours on and i don't have any lagg problems or the crackling one... i will try to stress it and see what's happening... any ideas why this is happening? To be honest i just wanted to test the volume of my monitor... I am very frustrated about this... cause i really have no idea how this is possible... any ideas how this is possible? i will try to stress and let you know about the results.
This thread is now locked and can not be replied to.
|
OPCFW_CODE
|
Create Validate.java
Author: Vitung Quach
This code won't compile because of the very first line:
package cmsc433.p2;
If you put it under the tests directory it needs to be
package cmsc433.p2.tests;
However, that change would cause failures on the submit server if they ever call try to call validateSimulation because it moved somewhere else. No idea if they do or not.
When I setup this repo I assumed (wrongly I know now) all future tests would be JUnit tests, which could be safely put in another directory and automatically updated/pulled in from the repo. No conflicts with the existing project structure, easy to overlay. Then it turns out this project doesn't follow that model.
If I pull this in as is, it will conflict with the existing ../Validate.java file.
And the logic here doesn't look like it can be spun off into a separate JUnit test? At least not directly
So the options are:
Change the package header to go under tests as I mentioned, and have people symlink to this Validate.java in a new location. And hope that they never call it on the submit server and it breaks nothing over there
Just expect people to delete their local Validate.java and replace the file on their systems manually. In that case, it would need to move up ../ out of tests to compile with the current package header.
Extensively modify the Validate.java class so that it can be inherited/composed in a way that allows a JUnit test class could call and implement this logic. Likely too much work
But I won't take something that won't compile as is, I'm annoying like that. You contributed the tests so if you want to resend it with your contrib history I'll take it. If I don't hear anything from you I'll just do 2 myself this night with these changes and push it up
If anyone wants to use the code, they can just copy and paste the code to
replace their current contents of Validate.java.
If you can, go ahead with #2 (which if I'm understanding correctly,
involves minimal change). The other options involve too much work
On Oct 10, 2017 5:53 PM, "Lucas Falkenstein"<EMAIL_ADDRESS>wrote:
This code won't compile because of the very first line:
package cmsc433.p2;
If you put it under the tests directory it needs to be
package cmsc433.p2.tests;
However, that change would cause failures on the submit server if they
ever call try to call validateSimulation because it moved somewhere else.
No idea if they do or not.
When I setup this repo I assumed (wrongly I know now) all future tests
would be JUnit tests, which could be safely put in another directory and
automatically updated/pulled in from the repo. No conflicts with the
existing project structure, easy to overlay. Then it turns out this project
doesn't follow that model.
If I pull this in as is, it will conflict with the existing
../Validate.java file.
And the logic here doesn't look like it can be spun off into a separate
JUnit test? At least not directly
So the options are:
Change the package header to go under tests as I mentioned, and have
people symlink to this Validate.java in a new location. And hope that they
never call it on the submit server and it breaks nothing over there
2.
Just expect people to delete their local Validate.java and replace the
file on their systems manually. In that case, it would need to move up
../ out of tests to compile with the current package header.
3.
Extensively modify the Validate.java class so that it can be
inherited/composed in a way that allows a JUnit test class could call and
implement this logic. Likely too much work
But I won't take something that won't compile as is, I'm annoying like
that. You contributed the tests so if you want to resend it with your
contrib history I'll take it. If I don't hear anything from you I'll just
do 2 myself this night with these changes and push it up
—
You are receiving this because you authored the thread.
Reply to this email directly, view it on GitHub
https://github.com/lmouelle/433sharedtests/pull/5#issuecomment-335619693,
or mute the thread
https://github.com/notifications/unsubscribe-auth/AVIGygtC71gC_t_OSS9gAPIFTWZlGKEOks5sq-dlgaJpZM4P0kwz
.
Alright, I moved it and pulled it in
|
GITHUB_ARCHIVE
|
Delete a Dynamic Partition
I want to delete my dynamic partition to install windows 7. I have 5 dynamic partitions currently on my hard disk. I don't want a single chunk of HD to be created. I just want one of the dynamic partitions to be made primary for installing windows 7. Please help !
Inorder to create Primary Partition, your Hard Disk has to be made Basic.
First [IMPORTANT], backup all data which is currently in your Hard Disk, because the conversion will result in data loss.
Get the Windows Installation Media, Start up with the Windows Installation Media.
Click Install Now.
Then Repair your computer.
Then a menu opens with a lot of options, there you have to open Command Prompt.
Then type the following commands carefully :
1.Run the Disk Partition Tool for Windows by the following command and press ENTER
diskpart
2.List all the disks which are currently onnected to the computer.
list disk
3.Note the disk number which has the * (star mark) under the heading Dyn
4.Then run the command , if the disk number is 0 (in most cases)
select disk 0
5.Then , you have to list all the volumes
list vol
6.Delete all the listed volumes.
select vol 0
delete vol
select vol1
delete vol
select vol2
delete vol
select vol3
delete vol
select vol4
delete vol
7.Finally, convert into Basic disk by this command.
convert basic
Then you should be good to proceed and later create partitions during Windows Installation.
Your question is not very clear. However, I will answer the best I can with the information given.
It sounds like you want to simply convert one of your five existing dynamic partitions to a primary partition onto which you will install Windows, without creating a new partition.
This should be very easy. You should not have to convert it to a Primary, as the Windows installation will do this already. So put in your installation disk for Windows (assuming Windows 7, as the tag implies), and choose one of the partitions. You should be able to install Windows onto either Primary or dynamic partitions.
However, ideally you want to install Windows on the first physical partition on your hard drive. So I would receommend to burn a gParted Live CD and boot into that. Now, using gParted, find the first partition on the drive and convert it to primary. Close gParted and remove disk.
Now, insert your Windows installation disk. Find the only Primary partition, and install windows there. Since there will be only one primary, and the primary is the first on the disk, then you can be sure that you are installing Windows on the first partition physically located on the disk.
Let me know if you have any trouble doing any of this.
|
STACK_EXCHANGE
|
ESD Protection with USB_2.0 and STM32F103 Series Microcontroller
I want to use an STM32F103 Microcontroller and power it with USB + get some data from the Microcontroller via the USB D+/D- lines.
Now this is the refference design I came up with, using the AN4879 and AN5612 application note from ST.
As this is my first Project of using an USB Port on an PCB, I just want to verify that this is a valid protection for the Microcontroller pins.
I also want to exchange the ESDA7P60-1U1M, that is recommended for the circuit by ST, because I want to order it from JLCPCB and would prefer to stick to the basic components.
But this diode seems a bit special because I have not found one with the same characteristica.
Could you also give me some impression what are the important parameters of this diode?
Grounding the connector shield will improve things massively, as well as general EMI immunity and signal integrity.
Check the connector pinout: you seem to have displaced the USB net labels.
Check datasheet on the TVS: your schematic symbol shows no connection between left and right sides, and therefore no USB connectivity.
Often, connections are made through (underneath) the chip, using two pins in parallel. They will therefore have the same net, not different nets.
There are some types with internal filtering e.g. USBUF01W6 and thus internal connection between pin pairs, which would use such a connection, hence checking the datasheet to be sure.
D1 is also redundant, if the array has a main clamp diode as the schematic suggests.
Layout is critical, with the GND pin needing a low impedance path to surrounding ground plane, and a bypass cap on VBUS (or joined to the internal power plane) doesn't hurt. A 2-layer design is reasonable but 4 layers is easier to get right.
Now I see my misstake. The diodes are also parallel to the data lines and not in series as I thought before. My bad.
A 4 Layer design is already planned, because it also does not cost very much more.
I have attached a snippet of the corrected schematic below.
Corrected Schematic. DP and DM are net labels that are connected to the Microcontroller. schematic
@Cats in the updated schematic the diode VN is connected to VBUS and VP is connected to GND. That looks reversed and will cause the SRV05-4 to effectively short-out the VBUS supply.
The D and ID pins are also still crossed.
Technically the parts should work if you used the parts from a reference design. Also any ESD protector intended for USB use should work. JLCPCB must have at least one suitable part.
However they seem to be wired very randomly and that does not work. Please look at the reference design again to copy it correctly.
No it's not the datalines should attach to the SRV05 and not pass through:
Source: https://www.onsemi.com/pdf/datasheet/srv05-4-d.pdf
The diode in the schematics has 4 IOs to protect. This schematic adds two protections per wire. It's not a 2 IO passthrough chip.
That's true, but if the part has already been selected it wouldn't hurt to do this.
@VoltageSpike Will adding two protection diodes per wire double the Junction Capacitance between I/O Pins and GND, and is that likely to be significant in terms of signal distortion?
Now as you mentioned it, I saw some other designs, where the D-/D+ were just passed through the device. But honestly I don't understand how that's working at all. There is no connection from between thes IOS..
Some of ST's parts do that, but if the device supports it then you can use it that way. But for this one you won't get signals through if you use it like the OP is
|
STACK_EXCHANGE
|
I've been a critic of Zoom in the past for some of their narrow-minded software development decisions. As with most online conferencing platforms that market their software mainly to people looking for an informal peer-to-peer meeting solution, Zoom tends to focus on that core customer base and neglect the narrower scope of people who use their software to host formal webinar events. But I'll give credit where credit is due and report that Zoom has rolled out a new feature designed precisely to help webinar organizers execute smoother-running webinars.
In order to tell you about what this new feature is and how it works, I first need to provide some background on how visual material (typically, slides) is transmitted to an audience during a webinar. There are two main ways, each with its own advantages and disadvantages. This will take a little bit of explaining, so please bear with me.
Some webinar platforms (Webex and BigMarker are two examples) enable organizers to upload slide content directly into the webinar interface itself for all of the presenters to see. Then, during the webinar, anyone on the presentation team can control those slides with a built-in slide control mechanism on their screen (usually clickable arrows or clickable slide thumbnails).
This "upload" method is very convenient. It allows each individual presenter to easily advance their slides with a single click of the mouse when it's their turn to present. They don't have to transmit their slides locally from their own computer, which usually results in long awkward pauses as they try to figure out how to do this on the fly, and they don't have to rely on someone else to advance their slides for them.
The "upload" approach, however, also presents some major downsides. The platforms that employ this method usually convert each slide into an individual static image, discarding any transitions or animations that were built into the original presentation. The conversion process can also cause other problems, including font substitutions, misaligned text, and other formatting issues.
Other webinar platforms (Zoom and GoToWebinar are two examples) use a different process to convey slides to an audience. Rather than allowing organizers to upload a single, consolidated slide deck into the webinar interface, presenters are required to share their slides directly from their own computers using the platform's built-in "screensharing" functionality.
The "screensharing" method has its benefits. If a presenter is literally broadcasting the slides they pull up on their own computer screen, then the audience will see the slides exactly as the presenter intends them to be seen. There are no formatting problems to worry about and any original movement effects remain in the presentation.
But to be perfectly frank, the "screensharing" method is a pain in the neck. As already mentioned, presenters never pull this off smoothly when it's their turn to present. You can't expect an already-nervous speaker to quickly find the screensharing apparatus, remember the correct series of mouse clicks to initiate the screensharing, smoothly navigate to the first slide in their slideshow, and do all this without making the embarrassing mistake of accidentally sharing something else on their screen first. And then, since their entire screen must contain only the visual material they want the audience to see, they suddenly lose easy access to the webinar interface and all of its important tools. Screensharing was never made for webinars. It was made for informal collaborative meetings between small groups of co-workers who need to share documents with each other.
But for the webinar platforms that require the screensharing of slides, there are ways to make things easier. The best way is to combine all of the individual presentations and other webinar slides into one single slide deck and then assign one person, ideally a non-presenter like the webinar producer or host, to share those slides continually from a dedicated "slide host" computer. This prevents the series of awkward pauses and mishaps as the webinar transitions from one screensharing presenter to another. However, like a game of Whac-A-Mole, one problem is solved as another one pops up. Now, there's seemingly no way for the presenters to advance their own slides. They'll have to rely on that one non-presenter "slide host" to do it for them using their verbal cues.
Fortunately, many screensharing platforms offer a "remote control" feature that enables the person sharing the slides to give control of their keyboard to individual presenters as needed. Since PowerPoint can be controlled by a keyboard's arrow keys, the presenters can use this remote control method to advance their own slides using their own keyboards. In theory, it sounds great. But in practice, the process of initiating the remote control can be even more complicated for each presenter than starting their own screenshare. Plus, there are all kinds of PowerPoint-related issues involved with this approach that I won't even get into. In the end, it's easier to delegate slide control to that one person and live with the repetitive "next slide" cues from the presenters.
A while back, when all hope of a satisfactory solution to multi-presenter screensharing webinars seemed lost, I stumbled across a third-party web app that actually lets presenters remotely control the slides running on another computer with their phone! This is the primary solution I still use today when producing and hosting webinars for my clients. The single consolidated slide deck is opened on a dedicated computer which also runs the third-party slide control app. Each presenter is sent a link that they open on their phone—and presto!—their phone screen becomes a slide clicker with forward and backward arrows. Just as if they were up on a stage with a portable clicker, they simply tap the arrows on their phone to move their slides. The slide host even has a control panel with which to enable and disable control for individual presenters.
I don't really know how this works, but I don't need to. All I know is that it does work. Granted, the app isn't perfect and it needs some tweaks and improvements, but it does the job and solves all of the problems cited above. My only concern has been that the little company that developed it will go out of business and the app will disappear.
Finally, now that we're up to date on the development and evolution of webinar slide transmission and control, it's time for the big reveal. What exactly is this new feature released by Zoom that makes multi-presenter webinars easy?
It's all of the above—rolled into one!
Zoom has combined the benefits of the "upload" method with the benefits of the "screensharing" method using the very viable remote control approach taken by the third-party slide control app I've found so useful.
On Zoom, you still have to screenshare your slides as always. But if you consolidate all of your slides into one single slide deck and assign one non-presenter to share those slides from a dedicated computer, that slide host can now select individual presenters (or "panelists," as Zoom calls them) from a control menu to remotely control those slides. This isn't the old-style keyboard remote control. This is the exact same concept used by the third-party web app mentioned above. The only difference is that, instead of controlling the slides from a phone, Zoom has built the control arrows right into the Zoom interface for each presenter given access by the slide host. It's as if the slides were loaded right into the webinar interface, à la the "upload" method, allowing the presenters to advance their slides with one easy mouse click.
Since finding that third-party app, I've often wished that the webinar platforms employing the screensharing approach would implement this concept. Heck, Zoom could have bought out that little company for their software and made someone a millionaire. Instead, it must have been easy enough for Zoom to develop the same thing in-house on their own.
In any event, I'm just happy that, for a change, Zoom is finally thinking about ways to make life easier for their webinar customers.
UPDATE: After employing the new Zoom slide control feature for a few months, I've discontinued using it with my webinar clients. On numerous occasions, presenters have had their slides move rapidly forwards or backwards for no apparent reason while using the feature. I've reported the problem to Zoom, but they insist that nobody else has reported the problem (probably because the feature is disabled by default and most people don't even know it exists in the first place) and that they can't replicate it themselves. When a software company can't replicate a problem reported by its users, the software company usually doesn't consider it a problem. So now I'm back to using the third-party slide control web app, which I've decided I like better anyway. Even when I try to compliment and credit Zoom for forward-looking innovation, I find myself having to retract it in the end.
Clark Webinar Consulting provides hands-on expertise and support to help businesses, nonprofits, and other organizations conduct and deliver worry-free, professional webinars. Learn more about our full range of webinar services.
|
OPCFW_CODE
|
Updated February 20, 2023
Introduction to Webpack ReactJS
What is Webpack ReactJS?
There are other alternatives like Browserify, Brunch, and Parcel, But, Webpack is the widely accepted and used module that has proved its merits across the global ReactJS development community.
Some of the advantages of using Webpack ReactJS are:
- Enhances the stability of React Application.
- Optimizes development time with a feature called “Hot Module Replacement”.
- Has absolute control of React Build System.
How to Use Webpack ReactJS?
Before starting to use Webpack in ReactJS, one should make sure to have the latest versions of NodeJS and npm globally installed.
Typically, Webpack ReactJS is configured with a file labeled as webpack.config.js as here all the required configurations are written.
How to Create Webpack in ReactJS?
Given below shows how to create a webpack in ReactJS:
Step 1: Install NodeJS, VSCode (any editor that supports Webpack file, most probably supports Scripting code).
Step 2: Open Command Prompt, and create a directory or a folder with the below command.
mkdir <directory/ file name> will create a directory inside C: Drive as shown cd <directory/ file name> will open the file, required to perform further operations.
Step 3: Then, need to initialize npm with the below command.
npm init -y
It will generate a package.json file with the above script. We can open the folder manually and check if package.json is created or not.
Step 4: Now dependencies need to be added with the below command. Adding dependencies to the code depends upon user requirements or functionality requirements.
Command to install dependencies:
npm i <dependency_name>
Similarly, add a few more dependencies.
So now let us go check the package.json for the dependencies, here the screenshot shows Visual Studio Code Editor. All the installed dependencies and related dependencies are scripted as below.
Step 5: Now we need to setup the Babel RC file for Babel configuration as shown below. Click on Add File and name it as .babelrc and enter the script in it.
This means telling Babel to use the plugins or the dependencies that are installed. At a later phase, when the user calls babel-loader from Webpack React, the program will know where to look for dependencies.
Step 6: Now create the webpack.config.js file and write the below shown script.
Port numbers can vary based on availability. It is the basic shell script for webpack. Here in line number 7, “mode” defines the webpack configuration to be under “development” or “production”. Development mode is optimized for developer experience and speed whereas production mode gives defaults that are useful in deploying the application.
Step 7: To get this webpack running, we need to provide an entry point, which is where the React applications bundling process starts from.
There are various versions of Webpack, in Webpack 4, if an entry point is not included, it will consider the entry point to be located at ./src directory.
Output and filename can also be listed as it tells webpack where to write the compiled files and the filename will be replaced with a hash generated by webpack every now and then application changes and gets recompiled.
Also, devtool creates source maps that help in debugging the application. Even though there is various type of source maps, inline-source-map has to be used only for development mode.
Step 8: Also, the module is the type of module application that includes such as Babel and the CSS module. And the rules are applicable to how the user handles each module.
Step 10: And the next rule is to test the CSS files with .css extension. Two loaders can be used, css-loader and the style-loader to handle css files. The user instructs css loader to use the CSS module, with camel casing, and create source maps.
Step 11: We also use html webpack plugin that accepts objects with different options. Here let’s specify the HTML plugin that is being used and the other dependencies for the bundler analyzer.
Step 12: And then configure the development server by specifying localhost and the port. If want to launch the application automatically, need to set historyApiFallback as true and open as true.
Below is the complete script on the screenshot from Step 7, this is how the webpack file in ReactJS is created.
Webpack.config.js file script sample:
Example of Webpack ReactJS
We have configured the Webpack in the above context, now we shall create a Webpack React Example for practical experience. If any modifications are required for webpack.config.js based on the example can be done.
Step 1: Create a source folder under the project directory, and also create a few files as shown below under the source folder.
Step 2: In App.js file, let’s have some script.
Step 3: Add a few css related codes.
Step 4: Write code for index.html as below. In line number 12, the script is pointing to the bundle.js file.
Step 5: Write code for index.js file which renders App.js file.
Step 6: Need to create start and build scripts as below.
Step 7: Now run the application and make modifications if required. Make required modifications to webpack.config.js accordingly.
Command to run the app:
npm run start
Step 8: To build the application, use the command npm run build.
Step 9: You can check the code running on the localhost with port 3030 as given. http://localhost:3030/ this being the output and a live example of webpack configuration for the ReactJS application to run with the dependencies.
With this, we conclude the topic “Webpack ReactJS”. We have seen what is webpack and how is it used and configured or created in ReactJS. We have also seen step by step procedure of how to create a webpack file and configure it according to the user’s needs. Also have illustrated a simple example of a ReactJS application that has been configured with webpack.config.js file for dependencies, entry, and output path files, etc.
This is a guide to Webpack ReactJS. Here we discuss the introduction, and how to create webpack in ReactJS and examples respectively. You may also have a look at the following articles to learn more –
|
OPCFW_CODE
|
__all__ = ["JobRunner"]
class JobRunner:
""" An interface to the the jobrunner service on the HUGS platform.
This class is used to run jobs on both local and cloud based HPC clusters
Args:
service_url (str): URL of service
"""
def __init__(self, service_url):
from Acquire.Client import Wallet
wallet = Wallet()
self._service = wallet.get_service(service_url=f"{service_url}/hugs")
self._service_url = service_url
def create_job(
self,
auth_user,
requirements,
key_password,
data_files,
hugs_url=None,
storage_url=None,
):
""" Create a job
Args:
auth_user (Acquire.User): Authenticated Acquire user
The following keys are required:
"hostname", "username", "name", "run_command", "partition", "n_nodes", "n_tasks_per_node",
"n_cpus_per_task", "memory_req", "job_duration"
where partition must be one of:
"cpu_test", "dcv", "gpu", "gpu_veryshort", "hmem", "serial", "test", "veryshort"
Example:
requirements = {"hostname": hostname, "username": username, "name": "test_job,
"n_nodes": 2,"n_tasks_per_node": 2,
"n_cpus_per_task": 2, "memory": "128G", ...}
requirements (dict): Dictionary containing job details and requested resources
key_password (str): Password for private key used to access the HPC
data_files (dict): Data file(s) to be uploaded to the cloud drive to
run the simulation. Simulation code files should be given in the "app" key and data
files in the "data" key
TODO - having to pass in a password and get it through to Paramiko seems
long winded, is there a better way to do this?
hugs_url (str): URL of HUGS service
storage_url (str): URL of storage service
Returns:
dict: Dictionary containing information regarding job running on resource
This will contain the PAR for access for data upload and download.
"""
from Acquire.Client import (
Drive,
Service,
PAR,
Authorisation,
StorageCreds,
Location,
ACLRule,
)
from Acquire.ObjectStore import create_uuid
import datetime
import os
if self._service is None:
raise PermissionError("Cannot use a null service")
if storage_url is None:
storage_url = self._service_url + "/storage"
if hugs_url is None:
hugs_url = self._service_url + "/hugs"
if not isinstance(data_files["app"], list):
data_files["app"] = [data_files["app"]]
try:
if not isinstance(data_files["data"], list):
data_files["data"] = [data_files["data"]]
except KeyError:
pass
# Get an authorisaton to pass to the service
hugs = Service(service_url=hugs_url)
# Credentials to create the cloud storage drive
creds = StorageCreds(user=auth_user, service_url=storage_url)
# Append a shortened UUID to the job name to ensure we don't multiple drives with the same name
short_uuid = create_uuid(short_uid=True)
job_name = requirements["name"]
job_name = f"{job_name.lower()}_{short_uuid}"
# Create a cloud drive for the input and output data to be written to
drive = Drive(creds=creds, name=job_name)
# Check the size of the files and if we want to use the chunk uploader
# Now we want to upload the files to the cloud drive we've created for this job
chunk_limit = 50 * 1024 * 1024
# Store the metadata for the uploaded files
uploaded_files = {"app": {}, "data": {}}
# These probably won't be very big so don't check their size
for f in data_files["app"]:
file_meta = drive.upload(f, dir="app")
uploaded_files["app"][f] = file_meta
# We might not have any data files to upload
try:
for f in data_files["data"]:
filesize = os.path.getsize(f)
if filesize < chunk_limit:
file_meta = drive.upload(f, dir="data")
else:
file_meta = drive.chunk_upload(f, dir="data")
uploaded_files["data"][f] = file_meta
except KeyError:
pass
auth = Authorisation(resource="job_runner", user=auth_user)
# Create a PAR with a long lifetime here and return a version to the user
# and another to the server to allow writing of result data
drive_guid = drive.metadata().guid()
location = Location(drive_guid=drive_guid)
# Read the duration from the requirements dictionary
# TODO - add in some reading of the duration
# try:
# duration = requirements["duration"]
# par_expiry = datetime.datetime
par_lifetime = datetime.datetime.now() + datetime.timedelta(days=1)
# Create an ACL rule for this PAR so we can read and write to it
aclrule = ACLRule.owner()
par = PAR(
location=location,
user=auth_user,
aclrule=aclrule,
expires_datetime=par_lifetime,
)
par_secret = par.secret()
encryped_par_secret = hugs.encrypt_data(par_secret)
# Encrypt the password we use to decrypt the private key used to access the HPC cluster
# TODO - is this a sensible way of doing this?
encrypted_password = hugs.encrypt_data(key_password)
par_data = par.to_data()
args = {}
args["authorisation"] = auth.to_data()
args["par"] = par_data
args["par_secret"] = encryped_par_secret
args["requirements"] = requirements
args["key_password"] = encrypted_password
function_response = self._service.call_function(
function="job_runner", args=args
)
response = {}
response["function_response"] = function_response
response["par"] = par_data
response["par_secret"] = par_secret
response["upload_data"] = uploaded_files
return response
def service(self):
return self._service
|
STACK_EDU
|
This week at UID we presented our initial concepts to each other. I must say I'm not entirely happy with the place I've arrived, it feels somehow safe. 30 min to present, 30 min to discuss and this was my attempt to wrap my 5 weeks of work around a product/system.
A unified hub to turn your automated house into a communication interface with your family.
The Home Network from my initial drawings became something similar to a hub. Instead of dealing with modular sensors that you would install in your house, I've recognised that the source of information is actually the person itself, that lives there. For that reason, the data to be share would be collected from someone's "personal device" (that's my fancy word for smartphone of the future).
For that reason, the set up is done by communicating to the House Network which people you are interested about sharing your information.
As soon as you enter your house, the Hub will know you arrived and the interface to communicate with your family will be open. Both sides need to agree being part of a network for the communication to start. Somehow similar to "allowing a follower" on Instagram.
The object should react to your touch, this way it will know exactly who you are and not allow everyone to access your network. The first message to be sent is a private one. It will only tell you that someone check on how you are doing, but it doesn't mean they got any data about you.
You can react to that as you please. If you choose to not do anything, only basic information will be shared ("someone read your message" as an example). If you choose to check back on the person, you will start sharing basic data ("I am home, I am awake, I am sleeping... etc"). The more you check on each other, more data is translated through The Home Network and shared.
The Home Network converts data collected by your personal device into ambient outputs, such as sounds, smells, ambient light, movement, temperature and it capture sounds as well. The goal is to communicate your daily routine and allow you to check specifically on people from Your Network.
It can also help you do to some activities together, so you can watch a movie together or cook using these multiple outputs. In a nutshell:
Enhances your family communication
A multi sensorial way of representing the casual comfort of being around someone when you need.
Focus on the bond you have with each person
Families don't have a homogeneous way of dealing with everyone. The network shares as much personal information as you want.
What I don't like about it
I'm still struggling with the "seamless" part of this project. It feels you don't really have control about the outputs of this device, a way of agreeing to receive those "sensorial messages". I also think that the shared activities are way too complicated to deal with (is the device an output or input?). The snow globe metaphor, however, is something I like. To me it symbolises the memory of innocence and happiness.
Many people during the presentation questioned the centralised artefact. So as a suggestion I decided to go back to the drawing board and do it QUICK. I've researched some in depth interviews and articles about how people deal with their home to test if it's worthy to separate this device. Maybe prototyping them will make more sense.
- Survey about relation from the people I interviewed and their personal objects, rooms and places. (Already sent!)
- Review articles about Living Apart Together.
- Define the stimuli and prototype them.
|
OPCFW_CODE
|
Two weeks ago we attended the “Practical conference about ML, AI & Deep Learning applications” – Machine Learning Prague 2019 from February 22nd – 24th, 2019.
We had a great time at the conference and were truly able to deepen our knowledge in the field of ML. Two of our most relevant areas of ML that we would like to cover are “Topic Modeling” (in this post) and “Anomaly Detection” in the second part (coming soon). Let’s get started!
Topic Modeling with Machine Learning
We heard about a large variety of interesting applications of Machine Learning. We heard a very interesting talk from Alexander Loosley at Data Reply called “Solving the Text Labeling Challenge with EnsembleLDA and Active Learning”.
He discussed how to effectively identify and label topics within a huge corpus of text.
We are interested in this topic at TheVentury because it has a wide variety of potential applications and it builds from our experience in Natural Language Processing.
Discovering topics in social media posts
One of the great things you can do through topic modeling would be to summarize what customers are saying through social media and automatically discover the most important message topics.
This can guide you to address the most relevant issues for your customers and can also flag critical messages as they come in.
In the presentation Alexander discussed the EnsembleLDA method to robustly identify topics.
LDA stands for Latent Dirichlet Allocation which is a Machine Learning method commonly used to cluster data by topics.
EnsembleLDA is the procedure by which LDA is repeated multiple times with different random starting points so that the most robust topics can be identified. This method gives confidence in the topic labels so that they can be used in analysis or other business applications.
Discovering topics in emails
Topic Modeling can also be used to categorize and label emails. For example, important information is contained in the many emails that are sent and received about a project or the company. By using Machine Learning, emails are categorized by their major topics, and can be further categorized into smaller topics.
This allows you to categorize, explore, and summarize the relevant information in the emails. Super useful!
Using Topics to sort and search documentation
Another application for Topic Modeling is to categorize internal documentation. If you or your employees spend significant time searching through internal documentation, you can greatly improve this experience using Machine Learning.
We can make it easy to quickly find the correct document through keyword searches based on topics.
With any of these applications, it is important to be able to visualize and explore the data and topics. Alexander showed some interesting visualization methods which help to understand the data, improve the methodologies, and produce the most accurate topic models.
It was a highly engaging and interesting talk! Thanks again Alexander!
In my next blog post we will be discussing “Anomaly Detection” which was also widely discussed at the #mlprague! The general idea is to use past data to decide if new data are conforming or anomalous. Watch out for it!
Does Topic Modeling sound like something your business could benefit from?
We are happy to have a talk about possible solutions that meet your needs.
Let’s talk: email@example.com
|
OPCFW_CODE
|
Old and New at Sci-Fi London 2012Follow article
A major highlight of a week of activity at the Sci-Fi London festival was the Horizon Spectrum event at the BFI Southbank celebrating 30 years of the Sinclair Spectrum 'home' computer. I attended the session on Sunday supporting Eben Upton of Raspberry Pi fame in the ‘Future‘ slot.
I took with me a couple of demonstration Raspberry Pi boards, one set up with a copy of the Fuse Spectrum emulator ported across by Andy Taylor, and the other set up to show off its HD video capabilities. We had the old Spectrum game ‘Manic Miner’ running: very popular with all the middle-aged visitors smitten with nostalgia, and young children for whom it must have seemed very unsophisticated. It was just a bit disturbing how many adults remembered which keyboard keys were used to play the game! The other Pi played a CGI cartoon movie. That really showed off the fast, smooth HD graphics. Even more amazing to people who remember using audio cassette recorders for mass storage, is that this video, a copy of the game Quake 3 and the Debian operating system sit comfortably on an SD Flash memory card with plenty of space left over. The two demos side by side showed how much things have advanced in 30 years.
While the Pi demos ran outside the presentation room, inside the audience was treated to some great speakers who have pushed the old Speccy to the absolute limits. I really liked Matthew Applegate’s (AKA PIXELH8) presentation on electronic music produced by combining the weedy bleeps from a typical old PC with the clicks and whirrs of mechanical components. A ‘show and tell’ session revealed the amazing level of enthusiasm that remains for such an old and obsolete piece of kit. The things you could do with 48k of RAM...
Finally we came to The Future and of course Raspberry Pi. Eben Upton, the driving force behind the project described his background as a Director of Studies at Cambridge and his discovery a few years ago of just how few students had any interest in computer programming. The concept of Raspberry Pi was to design a cheap, basic computer to be sold for a near pocket-money price, kick starting an interest programming the way Clive Sinclair had done with the ZX80, thirty years before. His experience of computer science students mirrors my own of electronic engineering students while I was an admissions tutor at Loughborough University over 10 years ago. When I was a schoolboy in the 1960’s and early 70’s I had all the inspiration I needed to go for an engineering career: the UK had a space program, built some of the most powerful computers in the world, designed fantastic aircraft – innovation everywhere you looked. By the 80’s it had all gone leaving a few visionaries like Clive Sinclair to carry on. I was a postgrad student when I built my ZX80: I didn’t need any encouragement, I was already hooked! Most of the guys in that room started their careers in computers thanks to Sinclair’s electronics and the BBC Micro. Now we need to do it again.
I had a mild dig at the audience, suggesting that they switch their undoubted talents for clever programming from old technology to the new. After all, Raspberry is fully Open Source and depends on enthusiasts taking a piece of hardware and creating an exciting application. The kids can’t do it on their own: they will need help learning to be good programmers. I would like to see a revived interest in hardware design. Raspberry Pi can be used to control real systems with motors, relays and sensors, but it will need interface hardware. The I/O connector has a number of general purpose digital input/output pins as well as three sorts of serial bus: SPI, I2C and UART, All these will need buffers because the processor chip pins are somewhat vulnerable unlike many of the microcontroller chips on the market. Let’s see some innovative designs, naturally developed using DesignSpark PCB!
One issue that came up during the Q & A session was the suggestion that Raspberry Pi is a serious competitor to Arduino. Let’s knock that one on the head straight away. The standard Arduino is built around an Atmel 8-bit microcontroller running at 20MHz. If this device is streets ahead of the old Z80 microprocessor of the Spectrum, then the 32-bit 700MHz ARM11- core Broadcom chip of RasPi is on the other side of the solar system. They are not in competition: they are complementary. Take my favourite topic as an example, mobile robots. The Arduino can provide the drive signals for the motors and process sensor data using its on-chip analogue-digital converters (ADC). The RasPi runs the high-level program, perhaps using Artificial Intelligence and communicates with the Arduino via a serial bus.
Deliveries of Raspberry Pi are ramping up. Personally I can’t wait to see some really great original designs coming through – described on DesignSpark of course!
If you're stuck for something to do, follow my posts on Twitter. I link to interesting new electronics components and development kits (some of them available from my employer!) and retweet posts I spot about robot, space exploration and other issues.
|
OPCFW_CODE
|
For almost ten years, I worked as a data manager supporting researchers. In particular, I worked for the British Antarctic Survey and the Swiss Polar Institute, with data from research that was done in the Arctic and Antarctic. I have supported several ship-based expeditions as a data manager.I used tools such as Python, Django, various databases and SQL to wrangle, manage and curate data in a wide variety of formats. As well as making the data easier to work with, describing the data and its collection as part of a metadata record, was important to ensure it could be used again in the future. I also used tools such as Frictionless Data to write machine-readable descriptions of the data, before publishing it.
In my work I advocate for open science and open data, following best practise as much as possible to ensure data are reusable in the future. For me, this also includes full reproducibility, making code available, as well as the data with full descriptions.
Thomas, J. & Pina Estany, C. SORTEE conference, 2022. Our experience of using Django to manage ecological data. https://doi.org/10.17605/OSF.IO/G7NWY
Thomas, J., Alba, M., Bouillet, E., Novellino, A., Pina Estany, C. & Volpi, M. IMDIS conference, keynote, 2021. How to stop re-inventing the wheel: a data management case study. https://doi.org/10.5281/zenodo.4597522
Thomas, J. IMDIS conference, 2021. Open access datasets from the Antarctic Circumnavigation Expedition. https://doi.org/10.5281/zenodo.4596543
Thomas, J. & Pina Estany, C. SCAR open science conference 2018. How, what, where, when: Expedition metadata and data collection. https://doi.org/10.5281/zenodo.5657280
Thomas, J. & Schmale, J. SCAR open science conference 2018. ACE-DATA: Delivering Added value To Antarctica. https://doi.org/10.5281/zenodo.5657300
Thomas, J. (2021). Data management in the field: workshop. Zenodo. https://doi.org/10.5281/zenodo.5531869
Landwehr, S., Volpi, M., Haumann, F.A., Robinson, C.M., Thurnherr, I., Ferracci, V., Baccarini, A., Thomas, J., Gorodetskaya, I., Tatzelt, C., Henning, S., Modini, R.L., Forrer, H.J., Lin, Y., Cassar, N., Simó, R., Hassler, C., Moallemi, A., Fawcett, S.E., Harris, N., Airs, R., Derkani, M.H., Alberello, A., Toffoli, A., Chen, G., Rodríguez-Ros, P., Zamanillo, M., Cortés-Greus, P., Xue, L., Bolas, C.G., Leonard, K.C., Perez-Cruz, F., Walton, D. and Schmale, J. (2021). Exploring the coupled ocean and atmosphere system with a data science approach applied to observations from the Antarctic Circumnavigation Expedition, Earth System Dynamics, 12(4), pp. 1295–1369. Available at: https://doi.org/10.5194/esd-12-1295-2021.
Thomas, J. (2021). Data management in the field (1.1). Zenodo. https://doi.org/10.5281/zenodo.5531876
Walton, D.W.H. & Thomas, J. (2018). Cruise Report - Antarctic Circumnavigation Expedition (ACE) 20th December 2016 - 19th March 2017 (1.0). Zenodo. https://doi.org/10.5281/zenodo.1443511
|
OPCFW_CODE
|
This is a practical walkthrough of room “Retro” from TryHackMe. Although this room is marked as hard level, but for me it felt like medium level.
Passwords, hashes and Flags will be redacted to encourage you to solve those challenges on your own.
First Things First
Deploy the target machine (this machine might take upto 3–5 minutes to load and accessible)
There are two ways to access the deployed target machine.
1) Use attacker box — Provided by TryHackMe, it consist of all the required tools available for attacking.
2) Use OpenVpn configuration file to connect your machine (kali linux) to their network.
For the sake of demonstration I am using OpenVPN connection on my Kali Linux machine.
We won’t be using Metasploit for this challenge
There are two flags to collect to complete this room.
We will start our enumeration with Nmap.
Two ports are open, HTTP and Terminal Service (RDP). Upon visit to HTTP service, it’s just a default IIS page. Let’s run GoBuster and find any directories and/or pages.
We go one directory and it’s moved permanently, upon visit it look like a blog with retro gaming information. I tried gobuster on /retro/ directory for further more.
Got some pages and directories. Looks like WordPress is running. Let’s try WPscan on target to find any available users.
Target is running WP version 5.2.1 and there are two users. If we visit the blog post of “wade” user, then we’d get this.
If we try to authenticate using enumerated username and this keyword, then we’d login successfully.
At this stage we have two different approach to get initial access (as mentioned on THM room). Either we can upload our PHP webshell and get a reverse shell or we can login through RDP. The former would give us service account access privileges and the latter one gives us local user (Wade) privileges. Both approaches have different way exploiting further (in terms of PrivEsc).
I would like to cover php reverse shell approaches in this walkthrough.
First I we have to gain initial access through wordpress theme editor, where we upload any php shell to get reverse connection.
Get the php reverse shell code from here. It supports Linux, Windows and macOS. Initially I tried with msfvenom php payload, for some reason it disconnects frequently. I could have gone for Meterpreter session, but I wanted to try something else.
Copy the content of php and paste it in theme editor 404.php file and update it. Make sure to change the IP address and Port before updating, it’s at the end of PHP code.
Set up netcat listener on your kali machine.
That 404.php file is located at following path, “hxxp://10.10.64.153/retro/wp-content/themes/90s-retro/404.php”, we have to access it to trigger the reverse shell php code.
We got the reverse connection with service account privileges. As you can see, the current privileges are listed on the screen. The “SeImpersonatePrivilege” is enabled on this current user.
“If you have SeAssignPrimaryToken or SeImpersonateprivilege, you are SYSTEM”. @decoder_it
These two privileges are very powerful indeed. They allow you to run code or even create a new process in the context of another user.
If we run “systeminfo” cmd on target, then we’d get system information like OS version and likewise.
It’s “windows server 2016 standard” and we can try “juicy potato attack”.
But before we do that, if we try to access the Wade user then it gives you access denied message.
So, now we need to either get user privileges or system privileges. Why to get a user privilege, when we got the opportunity to get system privileges.
History of Potato Attack
There are a lot of different potatoes used to escalate privileges from Windows Service Accounts to NT AUTHORITY/SYSTEM.
TL;DR — Every potato attack has it’s own limitations
If the machine is >= Windows 10 1809 & Windows Server 2019 — Try Rogue Potato
If the machine is < Windows 10 1809 < Windows Server 2019 — Try Juicy Potato
This can only be done if current account has the privilege to impersonate security tokens. This is usually true of most service accounts and not true of most user-level accounts. In our case, we don’t have Privilege to impersonate security tokens.
Get Juicy Potato Binary from here, you also need to create your reverse shell windows binary using msfvenom, set up a netcat listener and set up a http server on your kali machine.
Note: I know I said we won’t need Metasploit to complete this room, using this reverse shell we can receive remote connection using netcat (nc) from target.
Now we need to download both files to target windows machine.
As you can see, I have created a directory named “demo” to download both files.
Now we need to get a CLSID from here. CLSID’s are different from OS to OS.
The Class ID, or CLSID, is a serial number that represents a unique ID for any application component in Windows.
Execute JuciyPotato and with CLSID and you’d get a reverse connection on your kali netcat.
Aight, *hacker voice* we are in. Now we can retrieve user and root flags at once.
We got all the flags required to complete this rooms.
Thank you for reading this blog. While attempting this challenge I learned so many things. This was unique target with unique vulnerability.
|
OPCFW_CODE
|
Subctl --image-override failed on x509: certificate signed by unknown authority
Running join on a cluster with access to registry-proxy.engineering.redhat.com:
Failed to pull image<EMAIL_ADDRESS>rpc error: code = Unknown desc = error pinging docker registry registry-proxy.engineering.redhat.com: Get https://registry-proxy.engineering.redhat.com/v2/: x509: certificate signed by unknown authority
subctl join ./broker-info.subm --cable-driver libreswan --ikeport 501 --nattport 4501 --enable-pod-debugging --ipsec-debug --health-check --image-override submariner-operator=registry-proxy.engineering.redhat.com/rh-osbs/rhacm2-tech-preview-submariner-rhel8-operator@sha256:c47d6c0fdb1a3be6d67014785ad2712fa890fcb02f02a7e0b36b5d08c862931f
14:21:53 * ./broker-info.subm says broker is at: https://api.nmanos-cluster-a.devcluster.openshift.com:6443
14:21:53 ��� Discovering network details ...
14:21:53 * There are 1 labeled nodes in the cluster:
14:21:53 - default-cl1-l5mpb-worker-8d6gh
14:21:53 ��� Discovering network details
14:21:53 Discovered network details:
14:21:53 Network plugin: OpenShiftSDN
14:21:53 Service CIDRs: [<IP_ADDRESS>/16]
14:21:53 Cluster CIDRs: [<IP_ADDRESS>/14]
14:21:57 ��� Discovering multi cluster details ...
14:21:57 ��� Validating Globalnet configurations ...
14:21:57 ��� Validating Globalnet configurations
14:21:57 ��� Assigning Globalnet IPs ...
14:21:57 ��� Assigning Globalnet IPs
14:21:57 ��� Allocated GlobalCIDR: <IP_ADDRESS>/19
14:21:57 ��� Discovering multi cluster details
14:21:57 ��� Deploying the Submariner operator ...
14:31:59 ��� Deploying the Submariner operator
14:31:59 ��� Created operator CRDs
14:31:59 ��� Created operator namespace: submariner-operator
14:31:59 ��� Created operator service account and role
14:31:59 ��� Created lighthouse service account and role
14:31:59 ��� Created Lighthouse service accounts and roles
14:31:59 Error deploying the operator: timed out waiting for the condition
14:31:59
14:31:59 subctl version: v0.8.0-rc1
Pod logs shows:
14:36:37 Events:
14:36:37 Type Reason Age From Message
14:36:37 ---- ------ ---- ---- -------
14:36:37 Normal Scheduled 15m default-scheduler Successfully assigned submariner-operator/submariner-operator-844678c6c6-rth5j to default-cl1-l5mpb-worker-8d6gh
14:36:37 Normal Pulling 13m (x4 over 15m) kubelet Pulling image "registry-proxy.engineering.redhat.com/rh-osbs/rhacm2-tech-preview-submariner-rhel8-operator@sha256:c47d6c0fdb1a3be6d67014785ad2712fa890fcb02f02a7e0b36b5d08c862931f"
14:36:37 Warning Failed 13m (x4 over 15m) kubelet Failed to pull image "registry-proxy.engineering.redhat.com/rh-osbs/rhacm2-tech-preview-submariner-rhel8-operator@sha256:c47d6c0fdb1a3be6d67014785ad2712fa890fcb02f02a7e0b36b5d08c862931f": rpc error: code = Unknown desc = error pinging docker registry registry-proxy.engineering.redhat.com: Get https://registry-proxy.engineering.redhat.com/v2/: x509: certificate signed by unknown authority
14:36:37 Warning Failed 13m (x4 over 15m) kubelet Error: ErrImagePull
14:36:37 Normal BackOff 10m (x21 over 15m) kubelet Back-off pulling image "registry-proxy.engineering.redhat.com/rh-osbs/rhacm2-tech-preview-submariner-rhel8-operator@sha256:c47d6c0fdb1a3be6d67014785ad2712fa890fcb02f02a7e0b36b5d08c862931f"
14:36:37 Warning Failed 4m56s (x44 over 15m) kubelet Error: ImagePullBackOff
Environment:
subctl version: v0.8.0-rc1
@manosnoam we are well aware of the fact that you are blocked, but this is not a Submariner bug. @SteveMattar is on it, and will help you set up the environment properly. There is a bit of a learning curve for us also, as this is the first time you are testing with non-upstream images.
|
GITHUB_ARCHIVE
|
in centos R can't find make
I have R on centos machine. It normally comes with gcc 4.8 but at least one package (lubridate) needs a newer version. So I have updated the gcc. which gcc returns /opt/rh/devtoolset-7/root/usr/bin/gcc
which g++ returns /opt/rh/devtoolset-7/root/usr/bin/g++
gcc --version returns gcc (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5)
and
echo $PATH returns /opt/rh/devtoolset-7/root/usr/bin:/opt/rh/devtoolset-7/root/usr/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/gnelson/bin
So it appears I have a new version of gcc and g++ and the path to them is included in my PATH
But when I run R in a terminal and try to run install.packages("lubridate"), I have problems. It wants to first install Rcpp but when it tries to do that I get the following error.
g++7.3.1 -I"/usr/include/R" -DNDEBUG -I../inst/include/ -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -c Date.cpp -o Date.o
/bin/sh: g++7.3.1: command not found
make: *** [Date.o] Error 127
ERROR: compilation failed for package ‘Rcpp’
Lubridate also fails to install. What do I need to change?
More issues. I login into Rstudio on centos server. I run .libPaths() in the console. It returns
> .libPaths()
[1] "/home/gerald/R/x86_64-redhat-linux-gnu-library/3.5" "/usr/lib64/R/library"
[3] "/usr/share/R/library"
Then I run it in the terminal window that Rstudio makes available and get
> .libPaths()
[1] "/home/gerald/R/x86_64-redhat-linux-gnu-library/3.4"
[2] "/usr/lib64/R/library"
[3] "/usr/share/R/library"
install.packages("lubridate") compiles successfully in the terminal but not in the Rstudio console.
At the moment, I'm lost in linux land.
Have you tried putting a - in between g++ and 7.3.1? That is, changing (e.g.) CXX1X=g++7.3.1 to CXX1X=g++-7.3.1 in your ~/.R/Makevars?
Here's what my Makevars file looks likeCC=gcc-7.3.1 CXX=g++7.3.1 CXX_STD=CXX CXX1X=g++-7.3.1 SHLIB_CXXLD=g++7.3.1 so the - is there already. Do the other lines also need a -; eg CXX=g++-7.3.1
I would try it. I might also note that I'm not 100% sure whether you need the .1 in 7.3.1.
I had the same issue recently. Can you please elaborate more on "install.packages("lubridate") compiles successfully in the terminal but not in the Rstudio console." ?
This is not specific to centos but applies to (at least) the Mac OS and ubuntu. 1. launch a terminal window. 2. Type "R". If R is installed, copy "install.packages("lubridate")" (without the outside quotes to the terminal prompt. If R is installed this will install lubridate. It will then also be available in RStudio.
Basically this process gives you access to the terminal PATH which can be different than the PATH that RStudio uses.
|
STACK_EXCHANGE
|
This preview shows page 1. Sign up to view the full content.
Unformatted text preview: .* The first line for port 23 is the ESTABLISHED connection. All four elements of the local and
foreign address are filled in for this connection: the local IP address and port number, and the
foreign IP address and port number. The local IP address corresponds to the interface on which
the connection request arrived (the Ethernet interface, 126.96.36.199).
The end point in the LISTEN state is left alone. This is the end point that the concurrent server
uses to accept future connection requests. It is the TCP module in the kernel that creates the
new end point in the ESTABLISHED state, when the incoming connection request arrives and
is accepted. Also notice that the port number for the ESTABLISHED connection doesn't
change: it's 23, the same as the LISTEN end point.
We now initiate another Telnet client from the same client (slip) to this server. Here is the
relevant netstat output: Proto
0 SendForeign Address
188.8.131.52.23 184.108.40.206.1030 ESTABLISHED
220.127.116.11.23 18.104.22.168.1029 ESTABLISHED
0 file:///D|/Documents%20and%20Settings/bigini/Docu...homenet2run/tcpip/tcp-ip-illustrated/tcp_conn.htm (28 of 37) [12/09/2001 14.47.16] Chapter 18. TCP Connection Establishment and Termination tcp 0 0 *.23 *.* LISTEN We now have two ESTABLISHED connections from the same host to the same server. Both
have a local port number of 23. This is not a problem for TCP since the foreign port numbers
are different. They must be different because each of the Telnet clients uses an ephemeral port,
and the definition of an ephemeral port is one that is not currently in use on that host (slip).
This example reiterates that TCP demultiplexes incoming segments using all four values that
comprise the local and foreign addresses: destination IP address, destination port number,
source IP address, and source port number. TCP cannot determine which process gets an
incoming segment by looking at the destination port number only. Also, the only one of the
three end points at port 23 that will receive incoming connection requests is the one in the
LISTEN state. The end points in t...
View Full Document
This test prep was uploaded on 04/04/2014 for the course ECE EL5373 taught by Professor Guoyang during the Spring '12 term at NYU Poly.
- Spring '12
|
OPCFW_CODE
|
In the last chapter you learnt about basic functions of lists. Python 2.x used to have two range functions: range() and xrange() The difference between the two is that range() returns a list whereas the latter results into an iterator. If that given point does not exist , then you will be trying to get a character that is not inside of the string. Array(Index# or Integer Variable). Must be non-negative and less than the size of the collection. It also supports negative index to refer elements from end to start. 0.00/5 (No votes) … We’ve outlined a few differences and some key facts about the range and xrange functions. Example. They also allow None. ... Python range from Positive to Negative number. Indexes in Python programming start at 0. So the index is always Count - 1. To achieve the multiplication part i'm trying to make empty lists to which i add the numbers the user enters to define the ratio, then multiplying the list elements with each other. Here, we will have a look at some more interesting ways of working with lists. There is an optional parameter which is the index of the element to be removed from the list. See the full slicing tutorial on the Finxter blog here. We can access list elements using index. If you try to access anything greater than index n - 1, Python will throw an index out of range error: >>> suits = ["Clubs ... IndexError: list index out of range For negative indexing, you cannot exceed the length of the list in … Supply a negative index when accessing a sequence and Python counts back from the end. Python Operators. 01:44 a[-1] will access the last item, a[-2], and so forth. Python List Index Range. You can also use negative index numbers to slice a string. When using negative index numbers, we’ll start with the lower number first as it … In python sequence data types, we can access elements by indexing and slicing. Index 0 represents ... Omitted start and stop index defaults to the first index and last index of the array. Hello readers, welcome back to a yet another post of The Crazy Programmer. A range is … A string object is one of the sequence data types in Python. Create a sequence of numbers from 0 to 5, and print each item in the sequence: x = range(6) for n in x: print(n) They are like lists. Index was out of range. Strings are ordered sequences of character data.Indexing allows you to access individual characters in a string directly by using a numeric value. But when using tuples you might have encountered "IndexError: tuple index out of range". 2016-08-01 • Python, C++ • Comments. 02:05 If you try to access an index that’s beyond the scope, you’ll get that index out of range … Out of the three 2 arguments are optional. let’s now check out a sample code to see how to use the comparison operators. #Pass the negative index in pop() method. The Solution. Please Sign up or sign in to vote. Solve IndexError: list index out of range in Python. For your code: The amount of values in your list "frontIndexes" is 8, but the index range is from 0-7. ciao, sono nuovo! As we went through before, negative index numbers of a string start at -1, and count down from there until we reach the beginning of the string. Our range() statement creates a list of numbers between the range of 3 and 6. The notation is list[start:stop:step]. This happens when you are trying to access an element that is out of bounds of the tuple. This is super-useful since it means you don’t have to programmatically find out the length of the iterable in order to work with elements at the end of it. So, for example, my_list[-2] is the penultimate element of my_list, which is much better than my_list[len(my_list)-2] or even *(++my_list.rbegin()). This means that our loop will try to access a bird at the index position 5 in our tuple because 5 is in our range. The elements of a tuple are accessed the same way a list element is accessed - by mentioning indices. Python range vs. xrange. If you need to pop the 4 th element, you need to pass 3 to the pop() method. List index in Python starts from 0, not 1. Welcome to the second part of lists. how to solve index was out of range. The string index out of range means that the index you are trying to access does not exist. Here we all can see that I’m trying to print the value of a list at an index ‘5’ but here in the array, we don’t have any element at that index. … In case the start index is not given, the index is considered as 0, and it will increment the value by 1 till the stop index. Note that negative indicies are also valid in Python. Index was out of range. In a string, that means you're trying to get a character from the string at a given point. In this chapter we learnt about some basic operations that can be performed on lists. Python also allows you to index from the end of the list using a negative number, where [-1] returns the last element. Strings are objects of Python's built-in class 'str'. If you’re looking for an index range, you’re probably meaning slicing which is an advanced technique to carve out a range of values from a Python list or string. Python range() is a built-in function available with Python from Python(3.x), and it gives a sequence of numbers based on the start and stop index given. ... (1000, 7000) The number 7000 is not in range (1000, 7000) # Python range doesn't include upper range value The number 6999 is in range (1000, 7000) Python xrange() to … [code]name= “Python” Print(name[-1]) Print(name[-2]) Output->n O [/code]Here your accessing individual element. In Python 3.x, we have only one range() function. ... list index out of range. numpy.negative() function is used when we want to compute the negative of array elements. If the index passed to the pop() method is not in the range, it throws IndexError: pop index out of range … Examples are given below. Index was out of range. This list is inclusive of 3 and exclusive of 6. Indexes in Python programming start at 0. It returns element-wise negative value of an array or negative value of a scalar. It´s important to know, that a index (of a list or array or whatever) always starts with 0, while count gives you the total amount of values and starts with 1. Python Access Tuple - To access tuple in Python, you can use index from 0 to length-1 or negative index from -length to -1, to access tuple from its first element to last element in that order. So in this case -6—which is also negative the length of your list—would return the first item. Tuples in Python are a series of objects that are immutable. This means that the maximum index for any string will always be length-1.Here that makes your numbers fail because the requested index is bigger than the length of the string. If no index is specified, a.pop() removes and returns the last item in the list. That final example uses one of C++’s reverse iterators. When accessing items in an array, list or other collection type, you use parenthesis with an index number inside. Here in this example, ... accessing Python range objet with its index First number in given range is: 0 fifth number in given range is: 4 range() vs xrange() Functions. Forum >> Programmazione Python >> Web e Reti >> IndexError: list index out of range Pagina: 1 Scritto da rururu Messaggi 1 Registrato il 2018-05-22 22:34:42: 2018-05-22 22:56:33 - IndexError: list index out of range. It does not print out a third value. Example: Input: list1 = [12, -7, 5, 64, -14] Output:-7, -14 Input: list2 = [12, 14, -95, 3] Output:-95 Example #1: Print all negative numbers from given list using for loop Iterate each element in the list using for loop and check if number is less than 0. So, if you need to pop 2nd element, you need to pass 1 to the pop() method. Python List can have duplicate elements. So, you now know that index tells … Continue reading "Python Lists – Negative Indexing, Slicing, Stepping, Comparing, Max and Min" Must be non-negative and less than the size of the collection parameter name:index. There is no array data structure in Python, Python array append, slice, search, sort. Given a list of numbers, write a Python program to print all negative numbers in given list. Our list is only indexed up to 4. Okay so I'm having trouble finding my problem I know the line of code that Idle is telling me that needs corrected is not the problem, but I can't seem to find what the problem is. -1 refers to the last index, -2 refers to the second last index and so on. You can use it for list, string and other operations. In Python, all indices are zero_based. Python array module can be used to create arrays for integers and floats. What is the range? I'm trying to make a program that can work out a mathematical ratio problem and then take the square root of the answer. must be non-negative and less than the size of the collection? Let’s see the following scenario where we pass the negative index to the pop() ... pop index out of range. List support two operators: + for concatenation and * … It is an immutable sequence of Unicode characters. Here we get a string index out of range, because we are asking for something that doesn't exist.In Python, a string is a single-dimensional array of characters. Python range() Function Built-in Functions. Negative Sequence Indices in Python. Parameter name: index. Negative index in python means you are accessing the list from the end. Return Value ... Index in Python starts from 0, not 1. We can unpack list elements to comma-separated variables. If the index passed to the method is not in range, it throws IndexError: pop index out of range exception. So continuing to work with that list a I’ll have you try out negative indexing. An index is an integer that identifies the location of an element in a array, list, or other collection type. Sequence data types are strings, list, tuple, range objects.
|
OPCFW_CODE
|
But why do I care? I am a developer, I live in the land of abstractions. The JVM is as low as I go my friends.
The problem is, all developers need to do releases.. And releases have a tendency to go very wrong..
So, I decided to educate myself. What is it about Docker that has got the cool kids on Hacker News all excited?
I took a look at some Youtube videos, tried out the tutorials and read a handful of blog posts and how-tos on Docker. I just couldn’t get my head around it! Finally I took the plunge and spent the last couple of weeks working through James Turnbull’s The Docker Book.
So, am I enlightened? The short answer is – yes, I have enjoyed working my way through the “Docker Book” and I have a much better idea on how to use Docker and the sort of use cases it is designed for.
The book is written in a tutorial format. We start with the basics about Docker and containers and move on to installing Docker on your favoured Linux(1) distribution.
Once we have Docker up and running, we learn about the basics of Docker. How containers can be created from images and how these images can layered. We learn about the Docker repository can be used to download standard images (for example, the image for ubuntu:14.04 can be used to build a base container that runs Ubuntu 14.04 LTS) and how to build containers from the images that we define. The author walks us through setting up and managing some simple containers.
All the Dockerfiles and any scripts and code used in the examples is readily available from the Github repository that the author has setup for the book(2).
I suspect most readers will get the most value out of chapters 6 and 7 of the book. Here the author goes through some examples including:
- Using Docker to build a test environment
- Building a continous integration pipeline using Jenkins and Docker
- Building a web application that is deployed on multiple containers
These examples are quite detailed and well designed. Most of them could be used as a basis for a Docker based application stack “in the real world”.
The book concludes with chapters on the Docker API and how Docker can be extended.
“The Docker Book” does not go into details on how containers work beyond the introductory chapters. The focus of the book is about learning what you can do with Docker and it succeeds admirably. I deducted half a star from the review simply because the author does not delve much into things like performance implications of using Docker or on how exactly the operating system may allocate resources to applications running in containers. There are plenty of resources online on these topics4.
You can’t go wrong with “The Docker Book” if you are looking for a hand-on introduction to Docker. James Turnbull is a good tutor and the resources accompanying the book are great.
Will Docker solve my release woes? Is it actually ready to be deployed in a corporate setting? Perhaps a topic for another post..
My Rating: 4.5 out of 5
- Instructions for use of Docker on Windows and MacOSX are provided but are skeletal. Basically you need to use Boot2Docker
- I worked through almost every single example from the Kindle edition and didn’t find a buggy script or typo!
- The eco-system is moving fast. Kubernetes from Google is also worth checking out.
- The Docker blog is excellent
|
OPCFW_CODE
|
Baking to give low poly model a high poly model detail
I have been looking at different tutorials and even other postings on Stack Overflow to try and figure out how to bake detail from a high-poly model into a low-poly model. I have provided screenshots of both models:
I have UV unwrapped my low-poly object. I have also created a new texture image, linked it to a normal map which is attached to the low poly-object. I have made sure before baking that both objects are shaded-smooth.
I have set my bake value in the following manner. The models are on top of each other when I try to bake, I just separated them in the above image to show the difference between the model detail.
When I bake, it looks like it creates a good normal however, the model does not seem to get the same level of detail (not smooth). Is there an issue in my setup? Or to get the smoothness of the high-poly model, do I need to do something additional?
File:
Did you try "recalculate normals" in the low poly mesh (Edit mode, select All, Shift N)?
@joshsanfelici , Yes, I have made sure all normals are facing the correct direction.
Hello, it's not clear what kind of details you're trying to bake, the high-poly has no details actually, also the low-poly has a shape that is quite different, and last thing, the baking Extrusion value is too high (0.5 units), not sure why you want such a high extrusion
@moonboots, What I'm trying to get is the smoothness of the front front and bend in the boards. I also don't understand why the board is concaving it the center.
Do you mean that you're trying to bake the low-poly onto the high-poly object?
@moonboots , no, I basically want the same degree of roundness that is on the front of the board on the hi-poly to be on the low-poly without using as many vertices. My understanding is that I can do this with baking/normal maps. Is this not correct?
not really, using a normal map will add 3D bumps on the low-poly but it can't change its general shape
@moonboots, is there a way to affect the shape of an object without increasing the poly count drastically?
Maybe tell a bit more about the object you are trying to achieve, etc?
So this is what we see when we open up the file:
First thing, of big note: the low poly shape doesn't agree very closely with the low poly, for some reason both objects are keyframed, and the low poly and the high poly intersect.
The target normal map is connected with the current output, which will create a bake loop. We're baking the difference between the two objects' normals. We don't want to connect any normal map until after the bake.
The normal map node is using a strength of 10.0, which is insane. This is a value where reasonable values lie between 0 and 1.
We're baking selected to active, and using a very large ray extrusion-- 0.5 meters of extrusion, when our object is only about 0.15 meters tall. And we have a max ray distance of 0.5, which is totally unnecessary.
We see some things that are good too. No overlap on the UV map, and there's only a single UV map, so no room for confusion there.
Let's fix the first three problems. We'll delete all keyframes on the objects (because keyframes can only cause us problems) and reposition it to more closely match. We'll disconnect our normal map. And even though it doesn't matter yet, we'll set the normal map strength to the base 1.0:
Now, let's look at the final problem. When we bake, our low poly will shoot out rays in a direction opposite the surface normal, and read the high poly at the first place that gets hit. Here, the low poly is inside the high poly someplaces, so the place that gets hit is sometimes going to be the wrong side of the high poly. That's why we use ray extrusion: to start our rays further out than they actually are. We don't usually want to use max ray distance at all.
It can be difficult to visualize how much ray extrusion we want. What I do, rather than use ray extrusion (I will set both it and max ray distance to 0) is just use a temporary displace modifier on my low poly, which does the exact same thing:
So that's a displacement of 0.5: that's where it's shooting rays from! And we can see that with that much extrusion, we've got overlap, self-intersection. Let's tune that extrusion to the bare minimum we need to fully enclose the high poly:
One tenth as much. Unfortunately, even with that, we have overlap in the low poly, because of the close loops. But we can't use any less, because of the strong disagreement at the nose. So let's edit the low poly to spread those loops a bit:
Better. Not ideal, since I didn't do anything to the nose, but this is still something we can work with. We'll bake now. Then, I'll disable the displacement modifier and re-connect the normal map and see what we get, side by side with the high poly using the same BSDF:
Just about a thousand times better! It's still not great, mostly at the nose, where we have that strong disagreement between the low poly and high poly.
|
STACK_EXCHANGE
|
Writing for all audiences
UX copy should be accessible by everyone, so consider all users’ abilities (physical, cognitive, and more) when you write. Refer to the Web Content Accessibility Guidelines (WCAG) for accessibility compliance information.
Here are a few best practices for making your UX copy more accessible:
- Avoid directional language, like “Use the button to the left." It doesn't account for users with different screen layouts (desktop vs. mobile) or users working with screen readers.
- Use proper heading levels (H1, H2) to articulate the page content’s organization.
- Do not identify items by color, like “Click the blue button.”
- Use plain language and short sentences.
- Use common contractions (for example, "it’s” and "you’re") in areas that sound most natural to you. Contractions are great for maintaining a casual voice and tone and for making your UX copy more accessible. UX copy should make a product interaction feel, look, and sound more human. Contractions help you get there.
- Be clear and concise.
Use the following guidelines when writing text that is only visible to a screen reader, like an aria-label that describes an icon button:
- Avoid redundancy. Screen readers will announce the name of the component or element as well as associated property and state when the HTML is defined correctly.
- Make sure labels for elements like buttons or links make sense when pulled out of the context. Use descriptive hyperlinks instead of raw links or vague linked text, like “Click here.”
- When you define an aria-label for an element that also has associated visible text in the UI, ensure the aria-label begins with the same text that appears in the UI to avoid confusion with screen reader users who have vision.
- Avoid unnecessary capitalization.
- Avoid leet speak that uses numbers or special characters in place of letters (like "a11y" for accessibility).
Localize your content to align with different regional and cultural expectations and keep your source content (copy and code) as clear and consistent as possible. Source content created with these goals in mind yields better translations, improves product quality, lowers costs, and increases reuse.
Keep these additional guidelines in mind when writing for global users:
- Avoid idioms like “cross your fingers.” They don’t make sense in all languages.
- Avoid vague terms like “stuff” or “kind of.” They can be translated incorrectly.
- Use humor sparingly. It generally doesn’t translate well.
- Avoid culture-specific or location-specific references and examples. They won’t resonate with all users.
- Avoid adding words to an image. They make translation harder.
- Translated text can be a drastically different length than the text you originally provide. Ensure the text you use can be 50% shorter or 50% longer with no negative impact on design.
View source on GitHub
|
OPCFW_CODE
|
[Author Prev][Author Next][Thread Prev][Thread Next][Author Index][Thread Index]
Re: [school-discuss] ISO: Good programming language to teach an 8yrold
I'm just getting into Python myself, and liking it a *lot*. It's been a long time since I enjoyed a language so much.
1) Very accessible, as much as BASIC used to be; GUI IDEs available (one's included in the distro).
2) Very good power potential - hooks into the OS, window-making environments available (haven't gotten into this myself yet, though).
3) Lots of usage motivations - you can use it as a scripting language, some groups do scientific computing in it....
4) Encourages good programming style - indentation is *mandatory* :-) objects/classes are an integral part of the language - but you don't have to dive headlong into classes before doing anything interesting (which I feel is a pedagogical weakness of Java).
5) Runs pretty much the same on Linux, Windows. (Maybe Macs, I don't have one.)
So there's a recommendation, I guess.
Bill Kendrick wrote:
Back when I was 8 years old (about 20 years ago now!), I programmed in
BASIC on my Timex Sinclair 1000 and Atari 1200XL computers.
Today, I've been asked to tutor a very smart (but currently far
too Windows-saavy for his and his parents' own good) 8 year old kid.
He's got a Pentium (which doesn't work, and it sounds like Windows is
broken), and I'm thinking of installing Linux for him. (He's used it
and likes what he's seen.) I think he's ready to start picking up
programming, as it will provide him with a creative outlet for all that
computer geek energy he has.
What's a good, kid-friendly language for today's kids to use (on Linux,
I've though about picking up some Python and passing my knowledge on to him,
but of course I'd rather ask the educators and other experts on various
mailing lists for suggestions, since many of you have already dealt with
this problem before.
Thanks in advance!
email@example.com Check out the new, improved Tux Paint
-Robert Montante, Ph.D.
Department of Mathematics, Computer Science and Statistics
Bloomsburg, PA 17815 spam-resistant - "bobmon at acm dot org"
phone: 570-389-4624 emails - "bobmon at bloomu dot edu"
"Ooh, drat these computers!
They're so naughty and complex, I could *pinch* them!"
|
OPCFW_CODE
|
PRADO 3.2.3 is released!
November 26, 2013
The PRADO Team is proud to announce the formal release of PRADO 3.2.3.
Even being a patch release, it packs some important changes.
First, it's the first release since the move to github: the revision numbering scheme is different, using git's hashes instead of subversion's number; the download location has changed aswell.
Second, the long-awaited patch by javalizard adding even priorities, class and object behaviors, dynamic events and global events has been merged. To start experimenting it, have a look at TComponent's documentation.
Third, all the most recent bugfixes and improvements have been included, and a new THtmlArea4 component, based on TinyMCE4, has been added.
As usual we encourage you to report any problem with this release, we'll take care of them.
A last word on the work that's currently going on in the "jquery" branch on github. As the name suggests, a new PRADO version based on the jQuery js framework in the works, and it currently looks quite good even if it's not completed yet.
The PRADO Team
WSAT beta is released!
November 25, 2013
The PRADO Team is proud to announce the first beta release of WSAT.
Wsat is inspired in both Asp.Net - Web Site Administration Tool(WSAT) and Yii's Gii.
Wsat enables you to generate code saving your time in too many tedious tasks in a GUI fashion.
Current options available in this release:
1- Generate one or all Active Record Classes from your DataBase.
1.1- Automatically generate all relations between the AR Classes (new).
1.2- Automatically generate the __toString() magic method in a smart way (new).
To use it just add a service in your app configuration file as following:
<service id="wsat" class="System.Wsat.TWsatService" Password="my_secret_password" />
and then visit:
You can download a Prado version with jquery and the last Wsat release from github wsat branch.
All bug reports and recommendations are welcome!
The PRADO Team
We moved to GitHub!
September 14, 2013
We have just finished the migration of the Prado's code repository and issues from Google Code to Github.
The new project page on GitHub is available at https://github.com/pradosoft/prado
We are very happy with about this move; it was long overdue to enable proper integration with composer/packagist, and it will make life easier to developers and casual user on fixing bugs and tracking changesets.
The biggest change for user tracking day-to-day prado changes is the move from Subversion to Git. Don't hesitate to ask for support on the forum if any issue arises.
We also welcome our new team member Ciro Mattia, that will take care of the composer/github integration.
|
OPCFW_CODE
|
Doesn't show album art on songs not officially on apple music
Hence the title, it doesn't show the album art of a custom downloaded song (just shows the apple music logo)
And when I play a custom song by a certain artist it sometimes shows the album art of a different song they made instead, even tho I'm playing a mashup I made (Martin Garrix & MOTi's Virus and DJ EB & HoodSaded's Emotions)
Also how do I get the absolute cloned repo path or whatever to work (Keeps throwing "Error5 missing input/output")
Hence the title, it doesn't show the album art of a custom downloaded song
Actually the cover feature relies on the iTunes Store API, so it only works with album correctly tagged and available on Apple Music/iTunes.
Also how do I get the absolute cloned repo path or whatever to work
In the Finder you can right click on the folder apple-music-discord-rpc you cloned/downloaded, maintain alt and choose the option Copy apple-music-discord-rpc as Pathname or something like this.
Lemme try this https://www.reddit.com/r/MacOS/comments/kbko61/launchctl_broken/gpv2to1/
+ idk why it's showing the this, Promises of Tears is an unreleased song
And this is what it actually looks like
also when is the script suppose to automatically boot up? it says when you login but login where? and do i need to have terminal open etc, + the issues in the message above and my first message (regarding the first message i mean the issue where it shows album art of an artist even tho it's not that song)
idk why it's showing me this, Promises of Tears is an unreleased song
nextfire@MacBook-Pro-de-Nam ~> http 'https://itunes.apple.com/search?media=music&entity=album&limit=1&term=Avicii AVĪCI'
{
"resultCount": 1,
"results": [
{
"amgArtistId": 2097113,
"artistId": 298496035,
"artistName": "Avicii",
"artistViewUrl": "https://music.apple.com/us/artist/avicii/298496035?uo=4",
"artworkUrl100": "https://is2-ssl.mzstatic.com/image/thumb/Music125/v4/82/9b/58/829b585d-5c1b-4cca-98fd-a1b6c9929d1c/source/100x100bb.jpg",
"artworkUrl60": "https://is2-ssl.mzstatic.com/image/thumb/Music125/v4/82/9b/58/829b585d-5c1b-4cca-98fd-a1b6c9929d1c/source/60x60bb.jpg",
"collectionCensoredName": "AVĪCI (01) - EP",
"collectionExplicitness": "explicit",
"collectionId":<PHONE_NUMBER>,
"collectionName": "AVĪCI (01) - EP",
"collectionPrice": 4.99,
"collectionType": "Album",
"collectionViewUrl": "https://music.apple.com/us/album/av%C4%ABci-01-ep/1440899018?uo=4",
"contentAdvisoryRating": "Explicit",
"copyright": "℗ 2017 Avicii Music AB, under exclusive license to Universal Music AB",
"country": "USA",
"currency": "USD",
"primaryGenreName": "Dance",
"releaseDate": "2017-08-11T07:00:00Z",
"trackCount": 6,
"wrapperType": "collection"
}
]
}
Well as I said, it is a fuzzy search so it can fails pretty easily.
also when is the script suppose to automatically boot up? it says when you login but login where?
After setting the LaunchAgent .plist the script is automatically loaded when you log in your macOS session with no action needed.
Welp, this is incorrect
also i still have to quick start it cuz this didnt work
Can you show me your moe.yuru.music-rpc.plist?
Not from True, Pastis is from 2008
The plist seems correct, maybe recheck deno location, it is located at /usr/local/bin/deno on my Mac (installed with Homebrew too).
What appear when you launchctl unload ~/Library/LaunchAgents/moe.yuru.music-rpc.plist first then launchctl load ~/Library/LaunchAgents/moe.yuru.music-rpc.plist again? Also please send me the output of ls -l ~/Library/LaunchAgents/moe.yuru.music-rpc.plist.
Not from True, Pastis is from 2008
Well once again, I cannot really do anything with what the iTunes Search API returns aside adding an option in the script to disable it completely if it bother you too much.
it didnt give me an error when doing the unloud and load now
it didnt give me an error when doing the unloud and load now
Ok so the previous errors were probably just that the script was already loaded when you tried to launchctl load it I guess
so i dont need terminal open now?
Yes it should not be needed anymore (you can reboot to try).
thnx a lot, it worked
also you dont need to disable it, i just find it weird that it cant even find the correct album art for released songs
Alright then, I will close this issue for now if there nothing else to add
|
GITHUB_ARCHIVE
|
from django.http import HttpResponseRedirect
from django.contrib.auth.models import User
from .models import ApiInfo
from rest_framework import permissions, status
from rest_framework.decorators import api_view
from rest_framework.response import Response
from rest_framework.views import APIView
from .serializers import ApiSerializer, UserSerializer, UserSerializerWithToken
# POST body parsing.
import json
@api_view(['GET'])
def current_user(request):
"""
Determine the current user by their token, and return their data
"""
print('HERE')
serializer = UserSerializer(request.user)
print('+++++++++++++++')
print(serializer.data)
print('+++++++++++++++')
return Response(serializer.data)
@api_view(['POST'])
def add_api(request):
"""
Update a user's information based on their token.
"""
# Get the user.
print('U check')
print(UserSerializer(request.user).data)
user = UserSerializer(request.user).data['username']
# TODO: right way to do this?
# Get the user ID so that we can link across tables.
user_object = User.objects.get(username = user)
# Get the bulk information.
bulk = json.loads(request.body)
# Add the key for the user.
updated = ApiInfo(
local_username = user_object,
username = bulk['username'],
hostname = bulk['hostname'],
human_readable_hostname = bulk['human_readable_hostname'],
public_hostname = bulk['public_hostname'],
token = bulk['token'],
other_info = bulk['other_info']
)
updated.save()
print('========')
print(user)
print(updated)
print('=========')
return(Response(UserSerializer(request.user).data, status=status.HTTP_201_CREATED))
class UserList(APIView):
"""
Create a new user. It's called 'UserList' because normally we'd have a get
method here too, for retrieving a list of all User objects.
"""
permission_classes = (permissions.AllowAny,)
def post(self, request, format=None):
print('request.data: ')
print(request.data)
print('===============')
# Does this user already exist?
if User.objects.filter(username = request.data['username']).exists():
# Bad request because the user already exists.
return Response(status=status.HTTP_409_CONFLICT)
else:
serializer = UserSerializerWithToken(data=request.data)
if serializer.is_valid():
serializer.save()
return Response(serializer.data, status=status.HTTP_201_CREATED)
else:
# The request didn't provide what we needed.
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
# (OPTIONAL) Special "one-off" view for an API writing to user
# because 1) we don't want a persisent user-writable account
# outside of the system, and 2) the API has no way of writing
# without the user's token.
# So, write to the table, then change the token.
# We could have gone with a temporary token here, but
# that may be too much too worry about.
|
STACK_EDU
|
up vote fifteen down vote It isn't really a "shortcut" (or limited-circuiting) operator in how that
When current at operate time, --unlawful-entry= requires a search phrase parameter to specify a mode of operation:
These fundamental and effective ideas will serve you effectively Down the road! We’ll use these Thoughts to permit for displaying markers otherwise. If you're feeling you’re presently cozy with Inheritance and Polymorphism, feel free to dive straight to the project (programming assignment) for this week. As you're employed with the project, Be at liberty to get some fun and introduce new amounts of class hierarchies for improved performance.
As it has captivated reduced-top quality or spam responses that needed to be removed, submitting an answer now calls for ten popularity on this site (the association reward will not depend).
In case the component's maintainers have presently released a fixed Variation that no more makes use of JDK-interior APIs then you can contemplate upgrading to that version.
Under Just before start, click , find Make Artifacts and choose the HelloWorld:jar artifact in the dialog that opens. The Construct 'HelloWorld:jar' artifact process is included in the Right before launch undertaking listing. So every time you execute this run configuration, the artifact will likely be crafted immediately.
We are working to Enhance the usability of our Site. To guidance this effort and hard work, you should browse around this site update your profile!
Disables qualifications compilation. By default, the JVM compiles the strategy for a background activity, functioning the method in interpreter method right up until the track record compilation is completed.
Permits printing of ergonomically picked JVM flags that appeared on the command line. It could be beneficial to grasp the ergonomic values established through the JVM, like the heap space dimension and the chosen garbage collector. By default, this option is disabled and flags aren’t printed.
Prior to I get into the small print of your respective future assignment, I’ll release my Resolution to the very first assignment that you should glimpse over and fully grasp.
Commencing inside the Sims 3, the players have the ability to set and determine the kind of quite a bit. The attribute returns from the Sims 4. Great deal assignment permits community a lot for being distinct from each other, with regards to the assignment they have.
The -enableassertions (-ea) choice applies to all course loaders and to method classes (which don’t have a category loader). There’s a single exception to this rule: If the choice is provided with no arguments, then it doesn’t apply to procedure lessons. This makes it very easy to permit assertions in all classes apart from technique courses. The -enablesystemassertions choice provides a independent switch to help assertions in all method lessons.
Since the method is loaded at runtime, compilers are not able To accomplish this. Just the runtime setting and JIT compiler know specifically try here which lessons are actually loaded, and so only they are able to make selections about when to inline, if the tactic is final.[five]
|
OPCFW_CODE
|
Draft proposals on the European Commission's "general data protection regulation" have been leaked online. The authenticity of the 116-page document (PDF) was corroborated by the EC, which The Register contacted on Wednesday afternoon. Version 56 of the draft proposal was issued by Brussels on 29 November this year, and …
Erm...whose social graph?
It looks like the EU think the data belong to the users not to the company running the network. They let Zuck use it for a while, but he would have to give it back if asked. It sounds like personally identifiable data is to become a new class of intellectual property, belonging to the identifiable person(s) rather than whoever collected the data.
Sounds like a good plan - I'd like a zipped archive containing all my photos with an XML file with all data I've posted, including internal links to the photos. Any other social network can slurp in that data when I join.
OK, that could be quite a lot of info, but it the size could be calculated when creating it and limited by dates to the last x months. Just so long as profile info and references to friends are included regardless.
Is there a meta-social network that allows you to store your data on any social site and connect to friends on any other? The equivalent of Trillian but for social networking?
Open office / Microsoft office
The leagal profession pretty much forced microsoft to hand out its file types and support open file types in order to prevent a monopoly. Looking at Facebook's market share it seems entirely appropriate that an open export/import feature be demanded.
a good idea...
facebook has a bunch of my life from a few years ago - when I last used it, it would be nice to be able to recover that in one fell swoop and cut the thing totally dead.
It will also be interesting to see that this does to the social network market, once users arnt tied to a given provider, it opens up a world of opportuniity for all the other socal sites...
Is this the beginings of a digital thumbprint type system...?
They really need to think this through first
While I think such legislation is a fantastic idea, and in good keeping with Sir Tim Berners-Lee's views on data silos and walled gardens, as a programmer myself I can see a problem that needs to be addressed if this is to work.
That is, we need an W3C-style standard for exporting or importing this data. Obviously some kind of XML is the way to go, but we need some standardised labelling system within that XML. For example, how do I, as a web developer, know which part of a data block exported from Facebook is the user's first name, last name, phone number, email, post content and so on?
For example, if say Facebook do it like this:
and Google+ do it like this:
then we're going to have the devil's own time trying to decipher cryptically named XML files from different social networks.
So there really needs to be a defined, global standard for describing social network data. The W3C of course is the best body to address this, but the issue does need to be addressed before any such laws are enacted.
Besides the mind boggling technical burdens this would impose on pretty much anyone with a server, when you get into social networks, you open a whole nother can of worms. For example, your social network includes me. Do you have the right to give my information? I realize that's stretching the draft text as listed, but it's the inference the article was making.
Indeed, this is stupid. I would never adhere to that legislation even if it becomes mandatory.
Why the special treatment on automated electronic data? If you're gonna apply such requirements. Why don't you do it to all data?
Also, if data needs to be 'manually' approved with a click before it is allowed into the database. Does that still constitute automated processing?
The internet needs to be free from these red tapes. Net neutrality goes beyond wires and telecoms. It means leave the f-ing net alone and legislate AROUND it.
Data is very much a company's revenue stream particularly for FB and Google and what these legislators are telling them is to tell them to share their revenue stream. Ridiculous.
Perhaps if these legislators work for free. They might be able to back up their own call.
What the net needs is enforceable legislation, we already have enough legislation to cover data protection. Each country's ICO should have the right to audit a company's use of data much like the HMRC on tax. THAT would be enforceable (at least more so). Creating even more restrictive legislation on top of others is counter productive.
- Product round-up Too 4K-ing expensive? Five full HD laptops for work and play
- Review We have a winner! Fresh Linux Mint 17.1 – hands down the best
- Vid Antarctic ice THICKER than first feared – penguin-bot boffins
- 'Regin': The 'New Stuxnet' spook-grade SOFTWARE WEAPON described
- You stupid BRICK! PCs running Avast AV can't handle Windows fixes
|
OPCFW_CODE
|
Inconsistencies in showing page TSconfig in Info module
TYPO3 backend => Info => Page TSconfig
1. Consistent naming in the select list (see #94322)
2. (?) Remove "Module: Web > Modules" - as a module "Modules" no longer exists (#94324
3. Same order in results for "all" as order in select list (if "sort alphabetically" is not checked)
4. Use the same naming scheme and order in the documentation: https://docs.typo3.org/m/typo3/reference-tsconfig/master/en-us/PageTsconfig/Mod.html#
Related: naming conventions for extensions #94325
- Module [mod]
- Module: Web > Page [mod.web_layout]
- Module: Web > View [mod.web_view]
- Rich text editor [RTE] *
to be done: find a nice descriptive representation for "TCEMAIN", "TCEFORM" ...
I am using the `[...]` style of putting the technical representation in brackets as that is already customary for e.g. page ids, page fields, fields in Flexform etc. in the Backend forms.
In the select list, there is a mixture of showing the objects in
1. their technical representation as they are used in TSConfig, e.g. "TCEMAIN"
2. in a full plaintext representation, e.g. "Module: Web>Page"
3. and a mixture , e.g. " "Module key (mod) with overruling user settings"
Should be done consistently, IMHO.
Also, the sorting in the results for "all" should then be the same sorting as in the select list.
What I find important is that you can look at the list and then find this info in the documentation (and vice versa), maybe even have the docs page open while you go through the list.
When I look at "Module: Web>Page" I find this confusing. This is "mod.web_layout". I think it is done to point out where this applies and this is good, but you need both information. Here, the technical representation is missing, which also makes it difficult to use the option.
You don't know all the options by heart and things are still a little muddled. You configured the Backend Layouts and now want to check in Info => Page TSConfig.
Where is it? You don't know. You can look at "all" and then find it (with CTRL+f) but then you still don't know where it is in the select list the next time you need it (unless you already clicked and "got" the relation "Web > Page = "web_layout").
If you do got directly to "Module: Web> Page" in the select list, you do not see the missing piece "web_layout" but you need it to define, e.g.
This can make it error prone and confusing.
|
OPCFW_CODE
|
Why You Should Join Code-impact Dev Community
Getting involved in a community is one of the greatest decisions I have made in my career path. To be honest, if there’s something I wish I knew in my first year of campus, it is a tech community and how to get involved. Well, today we shall talk about CIDEVCommunity.
I wrote my first line of code in my year1 at the campus and I wrote it in C Programming language. I didn’t know what I was doing and why I was even doing it, nothing made sense to me. As time went on, I developed a hatred for every programming course unit. I almost felt like giving up on my Computer Science course, but thank Goodness, I eventually got to know about communities and I joined. In my second year of campus, I learned about Pyladies Kampala Community and I joined them. They introduced me to Python programming and I was amazed because Python was much easier as compared to C. This was an eye-opener to my capabilities.
After campus, I saw an advert for a Coding Bootcamp which I gladly applied. I wanted to give it another shot and this time, I didn't look back. I attended OutboxEDU Coding Bootcamp and my passion for programming increased every day. During this boot camp, I met Mr Shyaka Alex Nkusi, who was one of my facilitators at the Bootcamp and he is the founder of the Code-impact Dev Community. Since then, he has been my Mentor till now.
“Community is much more than belonging to something; it is about doing something together that makes belonging matter” — Brian Solis
Why Code-impact DevCommunity?
Code-Impact Learning community is an online community aimed at empowering and nurturing young people that are passionate and excited about technology with technical, leadership and soft skills that will expose them to possibilities of becoming world-class software developers and technology leaders on the African continent and beyond. This is why you should join and get involved in CIDEV Community.
As a community, we decided to leverage the freecodecamp curriculum as we find it more engaging and well structured for the purpose of learning. We encourage our members to do all the freecodecamp challenges and projects so as to attain their certificates. As I write some of the members including me have already got their first freecodecamp certifications. checkout my certificate. Freecodecamp has an awesome community of developers that we can learn from. Through their blogs and youtube channel, we’re able to get solutions to a number of challenges we face in our learning process.
At Code -impact, we hold weekly online meetups where we interact with the community members. We have a skill-up session during this meetups and we always do a Kahoot challenge regarding the skillset for the day. Our members find it very interesting and interactive. The challenge encourages us to revise what we have been learning throughout the week.
Code-impact has an amazing team of facilitators led by Mr Shyaka Alex who has a passion for helping us become successful Software Engineers. All the facilitators are passionate about learning and sharing knowledge, as they say, the best way to learn is to teach that’s exactly what we do, we learn more as we teach our members.
This is one of the essential things at Code-impact because we do not only learn code but also learn soft skills, During these sessions, members learn Technical writing, communication skills, technical leadership skills and so on. Learners ask a lot of questions regarding the Software Engineering career aside from code.
As an online community, we have chat groups where we keep in touch with our members before we hold the next meetup. We share learning resources, inspirational content, in the chat groups so as to encourage our members to keep moving. We have a Discord server and a Whatsapp group.
Code-impact Dev Community has a blog that members can rely on, we share knowledge through this blog and our members find it very useful. We also write solutions to the code challenges they might be facing during their learning process.
As a tech community, we love to keep our members updated with the latest technologies. Every week, we send a newsletter to our members to wrap up the week and also share tech news with them. This enables us not only to keep up with the trend in tech but also keep our audience engaged.
Getting involved in the community is something I won't regret and I am proud to be part of Code-impact. If you aren’t already an active member of a community, I invite you to join the Code-impact Dev Community and develop a habit of giving back through the community. Share your knowledge and let others learn from you as you also learn from them.
|
OPCFW_CODE
|
Hey HashGuardians and future Guardians
It´s time to announce our first HashGuardian stake holders reward. The HashBox (Early Adopter edition).
About the HashBox
The Early Adopter HashBox will be our first reward for early adopters, HashGuardian holders and supporters around the world.
It will be the FIRST ANIMATED NFT from the HashGuardians design team. The HashBox will never be minted or sold again after all HashGuardian holders claimed their HashBoxes.
- Claiming your FREE HashBoxes will be closed on 30th june 00:00 CET.
If you want to claim a FREE HashBox or even more HashBoxes. You will need at least 10 HashGuardians in your Cardano wallet and verify this Cardano wallet in our HashGuardians discord server.
What’s inside the Early Adopter HashBox ?
There will be some very rare and legendary assets around the HashGuardians Universe that will only be minted for the early adopters.
and other cool stuff with different rarities for the upcoming HashGuardians universe.
Some lucky HashBox owners will get the chance to pull a special 1 of 1 NFT from our HashGuardians design team out of the HashBox.
For our early adopter HashBoxes we have planned something very special like we did in our SILVER TICKET TESTDROP.
We as a team are Cardano enthusiasts and want to share the love with other CNFT projects.
So we decided to include a Jackpot v raffle for all HashBox holders.
Every HashBox holder will get the chance to win a historic Cardano NFT like Spacebudz, Cardanobits, Cryptoknitties, Hoskinsons, Cardano Warriors etc.
That is our way to spread love with all the Cardano projects out there.
After all HashBoxes are claimed and distributed, we will prepare the raffle and reveal the historic Cardano NFTs that are available to win.
Our well known community commander AdAxVaDeR will do a big raffle party and set up the lucky wheel that will include all distributed HashBoxes with their unique mint number.
Like we did it in our previous raffles. It will be full transparent, full random and LIVE!
With a bit of luck one of the historic CNFTs will be yours.
Like we said before this is our way to reward the early supporter of the HashGuardians Universe.
Claim your free HashBox
We will create a discord verification room where you can verify your Cardano wallet.
Just enter your Cardano receive address from the wallet that contain your HashGuardians in to the HashBox verification room. The exact amount of HashBoxes will be sent out to the verified Cardano wallet soon after claiming is closed.
After claiming is closed we will close the verification room and send out the free HashBoxes. Everybody should receive their HashBoxes soon after claiming is closed.
How to open your HashBox
The grand opening of your HashBoxes will be possible on our new Homepage, please check out the roadmap.
We will prepare the raffle after all HashBoxes are distributed.
This is the first step of our reward system around the HashGuardians universe. It is the beginning of a new era in the CNFT history.
Thank´s for your support so far, together we are strong Guardians!
- The HashBox (early adopter edition) is a reward for HashGuardians early adopter and supporters. It´s an exclusive reward and will never be sold or minted again!
- To receive a FREE HASHBOX you will need at least 10 HashGuardians in your Cardano wallet and this wallet has to be verified in our Discord server until the 30th june.
- For example if you hold 10 HashGuardians you will receive 1 FREE HashBox. If you hold 100 HashGuardians you will receive 10 FREE HashBoxes….
|
OPCFW_CODE
|
Is it possible to make anonymous inner classes in Java static?
In Java, nested classes can be either static or not. If they are static, they do not contain a reference to the pointer of the containing instance (they are also not called inner classes anymore, they are called nested classes).
Forgetting to make an nested class static when it does not need that reference can lead to problems with garbage collection or escape analysis.
Is it possible to make an anonymous inner class static as well? Or does the compiler figure this out automatically (which it could, because there cannot be any subclasses)?
For example, if I make an anonymous comparator, I almost never need the reference to the outside:
Collections.sort(list, new Comparator<String>(){
int compare(String a, String b){
return a.toUpperCase().compareTo(b.toUpperCase());
}
}
What are the problems with "garbage collection or escape analysis" when forgetting to make an inner class static? I thought this is about performance only...
Your inner class instance keeps a reference to its outer instance alive, even if you do not need it. This could keep stuff from getting garbage-collected. Picture a (resource-heavy) factory object that creates lightweight instances of something. After the factory has done its work (e.g. during application startup), it could be disposed of, but that only works if the things it has created do not link back.
I know, this is only an example, but since it is a recurring one, it should be mentioned that Collections.sort(list, String.CASE_INSENSITIVE_ORDER) works since Java 2, read, since the Collection API exists…
No, you can't, and no, the compiler can't figure it out. This is why FindBugs always suggests changing anonymous inner classes to named static nested classes if they don't use their implicit this reference.
Edit: Tom Hawtin - tackline says that if the anonymous class is created in a static context (e.g. in the main method), the anonymous class is in fact static. But the JLS disagrees:
An anonymous class is never abstract (§<IP_ADDRESS>). An anonymous class is always an inner class (§8.1.3); it is never static (§8.1.1, §8.5.1). An anonymous class is always implicitly final (§<IP_ADDRESS>).
Roedy Green's Java Glossary says that the fact that anonymous classes are allowed in a static context is implementation-dependent:
If you want to baffle those maintaining your code, wags have discovered javac.exe will permit anonymous classes inside static init code and static methods, even though the language spec says than anonymous classes are never static. These anonymous classes, of course, have no access to the instance fields of the object. I don’t recommend doing this. The feature could be pulled at any time.
Edit 2: The JLS actually covers static contexts more explicitly in §15.9.2:
Let C be the class being instantiated, and let i be the instance being created. If C is an inner class then i may have an immediately enclosing instance. The immediately enclosing instance of i (§8.1.3) is determined as follows.
If C is an anonymous class, then:
If the class instance creation expression occurs in a static context (§8.1.3), then i has no immediately enclosing instance.
Otherwise, the immediately enclosing instance of i is this.
So an anonymous class in a static context is roughly equivalent to a static nested class in that it does not keep a reference to the enclosing class, even though it's technically not a static class.
+1 for FindBugs - every Java developer should have this in their build.
That is very unfortunate, because it means you may want to avoid this otherwise almost concise syntax for performance reasons.
JLS 3rd Ed deals with the case of inner classes in static contexts. They are not static in the JLS sense, but the are static in the sense given in the question.
"you may want to avoid this otherwise almost concise syntax for performance reasons". And that syntax is supposedly getting really concise in Java 8.
@Thilo: I haven't kept up with Java news in a while; what's the change in Java 8?
One day, you'll be able to write Collections.sort(list, (Integer x, Integer y) -> x - y ); (or something very similar). Still does the same thing as before (an anonymous Comparator implementation), but much less typing.
Here's an example of how it's implementation dependent: this code prints true using javac (sun-jdk-1.7.0_10) and false using Eclipse compiler.
and should a future version of Java allow to create a static anonymous class/lambda?
@MichaelMyers I have tryed to simulate the FindBugs alerting me of doing an Anonymous Inner without using the 'this' reference, and nothing happens. Can you demonstrate how FindBugs alert you as you said in the begining of your answer? Just paste some link or what ever.
You can't cite arbitrary blogs like 'Java Glossary'. They are not normative references, and that particular one is riddled with errors from top to bottom, which he simply refuses to correct. I've tried. 'The feature could be pulled at any time' is mere guesswork. Your JLS quotes make it perfectly clear that this is a supported feature. YoU should remove all that, and the Tackline stuff too. Other people's errors are of no interest.
@PaulBellora, "i has no immediately enclosing instance" is not implementation dependent. that's what matters in terms of memory leaks.
I think there's a bit of confusion in the nomenclature here, which admittedly is too silly and confusing.
Whatever you call them, these patterns (and a few variations with different visibility) are all possible, normal, legal Java:
public class MyClass {
class MyClassInside {
}
}
public class MyClass {
public static class MyClassInside {
}
}
public class MyClass {
public void method() {
JComponent jc = new JComponent() {
...
}
}
}
public class MyClass {
public static void myStaticMethod() {
JComponent jc = new JComponent() {
...
}
}
}
They are catered for in the language spec (if you're really bothered, see section <IP_ADDRESS> for the one inside the static method).
But this quote is just plain wrong:
javac.exe will permit anonymous
classes inside static init code and
static methods, even though the
language spec says than anonymous
classes are never static
I think the quoted author is confusing the static keyword with static context. (Admittedly, the JLS is also a bit confusing in this respect.)
Honestly, all of the patterns above are fine (whatever you call them "nested", "inner", "anonymous" whatever...). Really, nobody is going to suddenly remove this functionality in the next release of Java. Honestly!
"(Admittedly, the JLS is also a bit confusing in this respect.)" You got that right. It sounded strange to say that it depends on the implementation, but I don't recall having seen any obvious errors in the Java Glossary before. From now on, I take it with a grain of salt.
We're actually not talking about any of the patterns. We mean that the anonymous nested class is static. I.e. add a "static" between new and JComponent in your third example.
I added a clarification to the original question to show what is wanted.
@MichaelMyers, The dictation in JLS always needs to be interpreted.
Kind of. An anonymous inner class created in a static method will obviously be effectively static because there is no source for an outer this.
There are some technical differences between inner classes in static contexts and static nested classes. If you're interested, read the JLS 3rd Ed.
Actually, I take that back; the JLS disagrees. http://java.sun.com/docs/books/jls/third%5Fedition/html/expressions.html#15.9.5: "An anonymous class is always an inner class ; it is never static."
static in a different sense to that in the question.
I;ve added a little clarification.
Inner classes can't be static - a static nested class is not an inner class. The Java tutorial talks about it here.
I have updated the question with a reference to the official nomenclature.
anonymous inner classes are never static (they can't declare static methods or non final static fields),but if they're defined in a static context (static method or static field) they behave as static in the sense that they can't access non-static (i.e. instance) members of the enclosing class (like everything else from a static context)
On the note of making an anonymous inner class static by calling them within a static method.
This doesn't actually remove the reference. You can test this by trying to serialize the anonymous class and not making the enclosing class serializable.
-1: Creating an anonymous class within a static method actually does remove the reference to the outer class. You can test this by trying to serialize the anonymous class and not making the enclosing class serializable. (I just did.)
|
STACK_EXCHANGE
|
|Redding scored his first ever GP pole
Photo: MarcVDS Racing
Alex Rins dominates in Moto3 and Scott Redding snatches his first career pole in Moto2.
Moto3 kicked off the qualfying action at the Circuit of the Americas and Estrella Galicia rider Alex Rins took control of the session, slowly whittling down his time to take the second pole of his career. He is joined on the front row by the same riders who featured on the podium on Qatar. Nearest rival was fellow countryman Red Bull KTM rider Luis Salom and third was filled by another Spaniard Team Calvo’s Maverick Vinales, making for an all KTM front row.
Fastest of the Honda riders was Caretta Technology rider Jack Miller in fourth. Joining him on the second row is Aspar’s Jonas Folger and Alex Marquez, younger brother to Marc and team-mate to Rins.
Danny Webb impressed on his Suter Honda and scored an eighth place start for Ambrogio Racing. Fellow Brit John McPhee struggled after having his bike wrecked in his crash with Alexis Masbou in Qatar and qualified 28th.
After a massive wait as MotoGP qualified at COTA it was time for the ever close Moto2 class to duke it out for the top spot.
A last gasp effort by Scott Redding saw him score the first pole of his career and the first for his team, Marc VDS. The effort ousted Takaaki Nakagami from the pole spot for the second race in a row. In Qatar the Japanese Italtrans rider lost out to Pol Espargaro at the death.
Third was filled by the first Suter to hit the timesheets, the Aspar of Nico Terol who took his first front row in Moto2 with his best qualifying since moving up into the class. Tito Rabat was fourth and well ahead of his Tuenti HP 40 team-mate Pol Espargaro who made the best of a frustrating session to push to seventh late on.
British riders Kyle Smith and Danny Kent had their best Moto2 qualifying session to date with Kent starting from 13th for Tech3 and Smith 19th for the Italtrans outfit.
Moto3 Austin Qualifying Top Ten:
1.Alex Rins SPA Estrella Galicia 0,0 (KTM) 2m 16.396
2. Luis Salom SPA Red Bull KTM Ajo (KTM) 2m 16.879
3. Maverick Viñales SPA Team Calvo (KTM) 2m 17.100
4. Jack Miller AUS Caretta Technology – RTG (FTR Honda) 2m 18.303
5. Jonas Folger GER Mapfre Aspar Team Moto3 (Kalex KTM) 2m 18.570
6. Alex Marquez SPA Estrella Galicia 0,0 (KTM) 2m 18.595
7. Niklas Ajo FIN Avant Tecno (KTM) 2m 18.618
8. Danny Webb GBR Ambrogio Racing (Suter Honda) 2m 18.899
9. Alexis Masbou FRA Ongetta-Rivacold (FTR Honda) 2m 18.906
10. Arthur Sissis AUS Red Bull KTM Ajo (KTM) 2m 18.951
Moto2 Austin Qualifying Top Ten:
1. Scott Redding GBR Marc VDS Racing Team (Kalex) 2m 10.577
2. Takaaki Nakagami JPN Italtrans Racing Team (Kalex) 2m 11.266
3. Nicolas Terol SPA Mapfre Aspar Team Moto2 (Suter) 2m 11.287
4. Esteve Rabat SPA Tuenti HP 40 (Kalex)2m 11.383
5. Dominique Aegerter SWI Technomag carXpert (Suter) 2m 11.606
6. Simone CorsiITA NGM Mobile Racing (Speed Up) 2m 11.725
7. Pol Espargaro SPA Tuenti HP 40 (Kalex) 2m 11.838
8. Xavier Simeon BEL Desguaces La Torre (Kalex) 2m 11.859
9. Mika Kallio FIN Marc VDS Racing Team (Kalex) 2m 11.925
10. Julian Simon SPA Italtrans Racing Team (Kalex) 2m 12.001
|
OPCFW_CODE
|
When I realized the next major feature leap for the Oplop project was going
to require adding drag-and-drop to the Chrome extension
(to become a web app once Google's web app store launches), I realized that my
use of jQuery would not be enough to ease the development of complex UI
interactions. That meant
library such as Google Closure or YUI.
In the end I chose to go with switching to Google Closure. I had two reasons
teh functional approach jQuery promotes). Two, with me starting work for Google
employer uses so I can possibly use my 20% time to contribute to
It ended up taking me three days, but the work is done and has landed in
Oplop's code repository.
Along the way I learned a few things that might useful to others who decide to
take the plunge and learn how to use Closure.
Jumping around in the docs
First thing I learned is that the Closure Library API has a slight
disconnect between types and functions. If you look under the Type Index tab
of the API docs you get a list of all the types in a certain namespace. That's
fine, but to get a list of functions in a namespace that are not tied to a
type, you need to look under File Index. That's a counter-intuitive naming
scheme. And after having used Python's documentation for so long, I come to
expect to have both types and functions defined in the same namespace to be
listed in the same location. I mean if some types and some functions are
connected enough to be in the same namespace, why not list them in the same
place? This was an annoyance for me.
Use all of the tools
Closure is more than the Closure Library. There is also the Closure
also the Closure Compiler which not only minifies your code greatly, but
will also perform sanity checks on your code. And I didn't even use Closure
All of these tools somewhat feed into each other. For instance, the Closure
Linter makes sure that you have JSDoc type annotations. The
Closure Library lets you specify namespaces and what symbols should be exported
of this is used by the Closure Compiler to do type checking, proper
minification without bad symbol renaming, etc. By supporting one tool you end
up gaining benefits from the other tools, leading to one potentially caring
enough to use all of them and maximize their benefits.
I used all the tools for Oplop and I am glad I did. I caught a couple bad bits
Chrome extensions is still a rather manual process). I have proper
consistently formatted which makes it easier to read. I even got the
side-benefit of getting to minify the code easily which led to shaving 14K off
the zip file for the web app.
Supporting all of these aspects of Closure turned
makes type checking and such a useful thing (if automated testing of Chrome
extensions was easier I may be singing a different tune right now, but that
might be for another time where I try to tie Selenium or something into Chrome
page actions or something).
You are either in or you're out
Closure is an ecosystem. It is not like jQuery where you just use it
here and there to make accessing or manipulating the DOM easier. If you want to
truly use Closure, you have to make a commitment to truly use it. This comes
with some consequences.
Probably the biggest consequence is that if you use other third-party
Closure Tools. You can obviously skip doing this, but then you either have to
start special-casing third-party code (e.g., not running the Closure Linter
over it) or you lose certain benefits (e.g., the Closure Compiler not being
able to detect all possible type errors). So to truly gain all benefits you
might want to retrofit third-party code with at least JSDoc markup (you can do
this externally using extern files). Some of
Google's APIs have extern files already created
along with jQuery 1.3.
But probably the biggest shock out of all of this was having to shift to using
raw DOM objects. When one gets used to doing
it ends up feeling odd doing
var node = goog.dom.getElement('node'); node.style.backgroundColor = 'red';
The niceties that jQuery wraps DOM nodes do spoil you.
But there is obviously a performance penalty. Something to keep in mind with
Closure is that this is the code that Google uses, the company obsessed with
performance. If there is a way to expose a function that does something quickly
that is a pattern not easily exposed through the DOM consistently, then there
will be a function for it. But for something as consistent as changing CSS
values, then there is no such added support.
So was it worth three days of coding to switch? Having not started the coding
yet for the fancy things I want to add to Oplop that drove me to even
considering switching, it's too early to give a definitive answer. But what I
can say is that I am not sitting here regretting anything. I feel like my
follow better coding practices. And the benefits from the compiler thanks to
following those coding practices also make me feel better that my code is in
|
OPCFW_CODE
|
GNU Screen: Reverse focus tabbing
Is it possible to bind Ctrl+a Shift+Tab to switch focus to the previous window in the list (much like Ctrl+a p but changing the focus)?
Specifically, I want to skip from the first to the last. For example, here is a screen profile:
# Vertical split with two on right
#
# +---+---+
# | | |
# | |---|
# | | |
# +---+---+
#
split -v
screen bash
title "main"
focus
split
screen bash
title "script"
focus
screen bash
title "interpreter"
focus left
# ctrl+a navigating regions
bind j focus down
bind k focus up
bind h focus left
bind l focus right
Focus starts in the top left screen. I'd like to jump straight to the bottom right. I can do this by hitting Ctrl+a Tab twice. Is there a single command and, if so, how could I bind this to Ctrl+a Shift+Tab?
Edit
Have just found Ctrl+a Ctrl+i switches to the next window as well (strangely not info). I'm on Screen version 4.01.00devel (GNU) 2-May-06
Good workaround
Add the following to your screen profile (or ~/.screenrc)
## ctrl+a navigating regions
bindkey "^j" focus down
bindkey "^k" focus up
bindkey "^h" focus left
bindkey "^l" focus right
Now you can zip around quickly by holding Ctrl + h/j/k/l in a vim-esque style.
The command you're looking for is focus prev.
Unfortunately, there is no way to bind Shift+Tab, since bind does not allow for modifier keys, and Tab is equivalent to Ctrl+i (which also explains why it's bound to focus instead of info), so you'd have to bind it to a different key.
Here is my ~/.screenrc
bindkey "^k" focus prev
bindkey "^l" focus next
Need to activate if ~/.screenrc does not already exist: Ctrl+a, Shift+:, source ~/.screenrc
Then we can:
Ctrl+l to cycle to the next split screen
Ctrl+k for cycle to the previous split screen!
UPDATE:
Having used this screen config for a day, I find that I somehow messed up other keyboard shortcuts. For example, when I press "Esc" in Vim to come out of editing mode, the screen was focusing on the previous screen instead!
So I cancelled the shortcuts I created by coming out of all the split screens, removing the ~/.screenrc file and starting again.
To be honest, I can live with just using Crtl-a, Shift to cycle though split screens, given the consequences of messing up other shortcut keys!
focus prev and focus next can switch region no matter it's the last or first
Also you can use multiple focus in one bind, add below to .screenrc can replace focus left and right when use a 3x2 region layout
bindkey "^a^o" eval "focus next" "focus next"
bindkey "^a^y" eval "focus prev" "focus prev"
|
STACK_EXCHANGE
|
using AgilePoker.WebUI.Data;
using AgilePoker.WebUI.Models;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Linq;
namespace AgilePoker.WebUI.Tests.Data
{
[TestClass]
public class GameServiceTests
{
#region GetPlayer
[DataTestMethod]
[DataRow(123, "Developer 1")]
[DataRow(789, "Developer 2")]
[DataRow(321, "Guest")]
public void GetPlayer_ShouldFindThePlayerById(int id, string name)
{
IGameRepository gameRepository = CreateDummyGame();
var gameService = new GameService(gameRepository);
var foundPlayer = gameService.GetPlayer(id);
Assert.AreEqual(name, foundPlayer.Name);
}
[TestMethod]
public void GetPlayer_ShouldReturnNull_IfUserNotExists()
{
IGameRepository gameRepository = CreateDummyGame();
var gameService = new GameService(gameRepository);
var foundPlayer = gameService.GetPlayer(123456);
Assert.AreEqual(null, foundPlayer);
}
[TestMethod]
public void GetPlayers_ShouldReturnEveryPlayer()
{
IGameRepository gameRepository = CreateDummyGame();
var gameService = new GameService(gameRepository);
var players = gameService.GetPlayers();
Assert.AreEqual(3, players.Count);
Assert.AreEqual(123, players[0].PlayerId);
Assert.AreEqual(321, players[1].PlayerId);
Assert.AreEqual(789, players[2].PlayerId);
}
#endregion
#region Join game
[TestMethod]
public void JoindGame_AsDeveloper_ShouldJoinAnExistingGame()
{
IGameRepository gameRepository = CreateDummyGame();
var gameService = new GameService(gameRepository);
var newPlayer = new PlayerModel(123456, "New");
var players = gameService.JoinGame(newPlayer);
Assert.AreEqual(4, players.Count);
Assert.AreEqual(123456, players[3].PlayerId);
}
[TestMethod]
public void JoindGame_AsGuest_ShouldJoinAnExistingGame()
{
IGameRepository gameRepository = CreateDummyGame();
var gameService = new GameService(gameRepository);
var newPlayer = new PlayerModel(123456, "New Guest", true);
var players = gameService.JoinGame(newPlayer);
Assert.AreEqual(4, players.Count);
Assert.AreEqual(123456, players[3].PlayerId);
}
[TestMethod]
public void JoindGame_AsDeveloper_ShouldJoinAnEmptyGame()
{
IGameRepository gameRepository = CreateEmptyGame();
var gameService = new GameService(gameRepository);
var newPlayer = new PlayerModel(123456, "New");
var players = gameService.JoinGame(newPlayer);
Assert.AreEqual(1, players.Count);
Assert.AreEqual(123456, players[0].PlayerId);
}
[TestMethod]
public void JoindGame_AsGuest_ShouldJoinAnEmptyGame()
{
IGameRepository gameRepository = CreateEmptyGame();
var gameService = new GameService(gameRepository);
var newPlayer = new PlayerModel(123456, "New Guest", true);
var players = gameService.JoinGame(newPlayer);
Assert.AreEqual(1, players.Count);
Assert.AreEqual(123456, players[0].PlayerId);
}
[TestMethod]
public void JoindGame_ShouldNotJoin_IfHeAlreadyJoinedTheGame()
{
IGameRepository gameRepository = CreateDummyGame();
var gameService = new GameService(gameRepository);
var newPlayer = new PlayerModel(123, "New Name");
var players = gameService.JoinGame(newPlayer);
Assert.AreEqual(3, players.Count);
Assert.AreEqual(123, players[0].PlayerId);
Assert.AreEqual("Developer 1", players[0].Name);
}
#endregion
#region Kick player
[TestMethod]
public void KickPlayer_ShouldKickAPlayer_IfThePlayerExistsInTheGame()
{
IGameRepository gameRepository = CreateDummyGame();
var gameService = new GameService(gameRepository);
var newPlayer = new PlayerModel(123, "New");
var players = gameService.KickPlayer(newPlayer);
Assert.AreEqual(2, players.Count);
Assert.AreEqual(321, players[0].PlayerId);
Assert.AreEqual(789, players[1].PlayerId);
}
[TestMethod]
public void KickPlayer_ShouldKickAPlayerById_IfThePlayerExistsInTheGame()
{
IGameRepository gameRepository = CreateDummyGame();
var gameService = new GameService(gameRepository);
var players = gameService.KickPlayer(123);
Assert.AreEqual(2, players.Count);
Assert.AreEqual(321, players[0].PlayerId);
Assert.AreEqual(789, players[1].PlayerId);
}
[TestMethod]
public void KickPlayer_ShouldDoNothing_IfThePlayerDidNotExistsInTheGame()
{
IGameRepository gameRepository = CreateDummyGame();
var gameService = new GameService(gameRepository);
var players = gameService.KickPlayer(123456);
Assert.AreEqual(3, players.Count);
Assert.AreEqual(123, players[0].PlayerId);
Assert.AreEqual(321, players[1].PlayerId);
Assert.AreEqual(789, players[2].PlayerId);
}
[TestMethod]
public void KickPlayer_ShouldDoNothing_IfThePlayerIsNull()
{
IGameRepository gameRepository = CreateDummyGame();
var gameService = new GameService(gameRepository);
var players = gameService.KickPlayer(null);
Assert.AreEqual(3, players.Count);
Assert.AreEqual(123, players[0].PlayerId);
Assert.AreEqual(321, players[1].PlayerId);
Assert.AreEqual(789, players[2].PlayerId);
}
#endregion
#region Restart game
[TestMethod]
public void RestartGame_ShouldUnselectEveryPlayersCard()
{
IGameRepository gameRepository = CreateDummyGame();
var gameService = new GameService(gameRepository);
var players = gameService.RestartGame();
Assert.AreEqual(3, players.Count);
foreach (var player in players)
{
Assert.IsFalse(player.SelectedCard.Selected);
}
}
[TestMethod]
public void KickPlayer_ShouldDoNothing_IfTheGameIsEmpty()
{
IGameRepository gameRepository = CreateEmptyGame();
var gameService = new GameService(gameRepository);
var players = gameService.RestartGame();
Assert.AreEqual(0, players.Count);
}
#endregion
private static IGameRepository CreateEmptyGame() => new GameRepositoryMock();
private static IGameRepository CreateDummyGame()
{
return new GameRepositoryMock()
.AddDeveloper(123, "Developer 1")
.AddGuest(321, "Guest")
.AddDeveloper(789, "Developer 2");
}
}
}
|
STACK_EDU
|
How does Canon automatically connect the PIXMA printer to the WLAN router
I just bought a Canon PIXMA TS6150. I downloaded an app on my windows 10 machine as instructed from ij.start.canon. Then some magic occurred and my printer was automatically connected to the WLAN without first connecting via USB or me having to type anything into the printer.
How did my laptop manage to communicate with the printer without the printer being first connected to the WLAN router or directly to the laptop via USB.
I'm also surprised that the app was able to grab the WLAN password from somewhere on the computer without first asking to open a key safe or something like that.
I'm going to take a guess. I think it turned my laptop into a temporary WLAN hotspot with a special SSID. It then configured the printer and then it switched the laptop back as a wlan client.
"How did my laptop manage to communicate with the printer without the printer being first connected to the WLAN router or directly to the laptop via USB?" Answer: "I downloaded [and installed] an app on my windows 10 machine as instructed from ij.start.canon." Can you be more specific as to your question?
ij.start.canon tells you to install an app which will connect the printer to the network. The printer is not connected via usb or bluetooth or any other means to the laptop. After you click through a few options something happens and then the printer is now connected to the same WLAN as the laptop. I was curious on what techniques the app could use be used to achieve this. My self comment above is a guess.
I’m assuming you followed direction similar to those found here: https://support.usa.canon.com/kb/index?page=content&id=ART167454
With that said, you left a lot of details out of your question about the process you followed which makes it fairly easy to determine what takes place.
First, you enable easy wireless connect on the printer.
Second, you complete the setup from the software on your computer. During that setup you are warned that, “the network connection is temporarily disabled during setup.”
Therefore, it is fairly clear that the printer is enabling its wireless interface and broadcasting a temporary wireless network that your laptop then connects to in order to finish configuring the printer.
Printers have had similar functionality for a long time, called wireless direct.
When you ran the setup program on your laptop, you were prompted to elevate to an administrative mode, when you chose to “allow the software to make changes to your computer.” That is all that is necessary to read any passwords of saved wireless networks from your computer.
I guess the laptop had wireless direct turned on out of the box. Thanks for the answer.
|
STACK_EXCHANGE
|
Problem connecting to local IP
I attempted to connect to a local IP in the form <IP_ADDRESS>:8000/botlog.json. And at first I got a long loading screen until it eventually loaded, and now the widget only displays zero values.
Earlier, I was receiving an immediate error about connecting to the address.
In addition, I am unable to click on the little settings cog (or if I am hitting it, nothing is popping up.)
I am using Android 7.0 unrooted, with the Google homescreen, and the precompiled release of the apk.
Hi Evan, thanks for reporting the issue. I couldn't reproduce the problem, it works without errors on my device even when connecting to an IP address in the local network.
The non-reacting settings button is probably just another symptom of the same issue.
Is your web browser on your device able to connect to the same path and does it display the botlog?
I will probably include an option to display an error log in the upcoming version to help me with such issues.
Yes, I tested it by copy-pasting the same IP to both the browser and the app. The botlog was correctly returned in the browser.
If it is relevant, I have a VPN app on my phone that I occasionally use but was disabled at the time.
Attached are relevant photos:
Log when checked through browser:
https://drive.google.com/open?id=0By47wOGzjnn_anFqMENyZGJ4Nzg
When attempting to connect to same IP as above via widget:
https://drive.google.com/open?id=0By47wOGzjnn_YjZWUkJ5VVd6Qlk
(Same result when not prefaced with http://)
I just uploaded the new version, which includes an error log. Could you update to the newest version and post the error log after trying to connect again?
You can find the error log in the menu at the top right corner of the configuration screen.
Sun Jun 04 17:04:23 MST 2017
com.google.gson.JsonSyntaxException: duplicate key: null
at com.google.gson.internal.bind.MapTypeAdapterFactory$Adapter.read(MapTypeAdapterFactory.java:190)
at com.google.gson.internal.bind.MapTypeAdapterFactory$Adapter.read(MapTypeAdapterFactory.java:145)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:129)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:220)
at com.google.gson.Gson.fromJson(Gson.java:887)
at com.google.gson.Gson.fromJson(Gson.java:852)
at com.artem.lendingwidget.network.LendingNetworkService.handleActionBotlog(LendingNetworkService.kt:130)
at com.artem.lendingwidget.network.LendingNetworkService.onHandleIntent(LendingNetworkService.kt:43)
at android.app.IntentService$ServiceHandler.handleMessage(IntentService.java:67)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.os.HandlerThread.run(HandlerThread.java:61)
This one should be more useful, get this after 30s+ of loading icon:
Sun Jun 04 17:07:17 MST 2017
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:334)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:196)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:178)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:356)
at java.net.Socket.connect(Socket.java:586)
at com.android.okhttp.internal.Platform.connectSocket(Platform.java:113)
at com.android.okhttp.Connection.connectSocket(Connection.java:1432)
at com.android.okhttp.Connection.connect(Connection.java:1390)
at com.android.okhttp.Connection.connectAndSetOwner(Connection.java:1667)
at com.android.okhttp.OkHttpClient$1.connectAndSetOwner(OkHttpClient.java:133)
at com.android.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:466)
at com.android.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:371)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.execute(HttpURLConnectionImpl.java:503)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getResponse(HttpURLConnectionImpl.java:438)
at com.android.okhttp.internal.huc.HttpURLConnectionImpl.getInputStream(HttpURLConnectionImpl.java:247)
at java.net.URL.openStream(URL.java:1058)
at com.artem.lendingwidget.network.LendingNetworkService.handleActionBotlog(LendingNetworkService.kt:53)
at com.artem.lendingwidget.network.LendingNetworkService.onHandleIntent(LendingNetworkService.kt:43)
at android.app.IntentService$ServiceHandler.handleMessage(IntentService.java:67)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.os.HandlerThread.run(HandlerThread.java:61)
I tested it with a static, remote IP and got the same error.
The newest release fixes the issue.
The problem was that cryptocurrencies were implemented as enum constants. I haven't included all cryptocurrencies though, therefore the json converter mapped unknown currencies to null values, which caused problems. Cryptocurrencies are treated as strings now so this shouldn't cause any more problems.
Let me know if any other issues occur.
Very cool, thanks for the fix!
Are you in the Slack channel? I would like to discuss something with you.
Just joined the channel :)
|
GITHUB_ARCHIVE
|
using System.Linq;
using JsonApiDotNetCore.Exceptions;
using JsonApiDotNetCore.Internal;
using JsonApiDotNetCore.Internal.Contracts;
using JsonApiDotNetCore.Managers.Contracts;
using JsonApiDotNetCore.Models;
namespace JsonApiDotNetCore.Query
{
/// <summary>
/// Base class for query parameters.
/// </summary>
public abstract class QueryParameterService
{
protected readonly IResourceGraph _resourceGraph;
protected readonly ResourceContext _requestResource;
private readonly ResourceContext _mainRequestResource;
protected QueryParameterService() { }
protected QueryParameterService(IResourceGraph resourceGraph, ICurrentRequest currentRequest)
{
_mainRequestResource = currentRequest.GetRequestResource();
_resourceGraph = resourceGraph;
_requestResource = currentRequest.RequestRelationship != null
? resourceGraph.GetResourceContext(currentRequest.RequestRelationship.RightType)
: _mainRequestResource;
}
/// <summary>
/// Helper method for parsing query parameters into attributes
/// </summary>
protected AttrAttribute GetAttribute(string queryParameterName, string target, RelationshipAttribute relationship = null)
{
var attribute = relationship != null
? _resourceGraph.GetAttributes(relationship.RightType).FirstOrDefault(a => a.Is(target))
: _requestResource.Attributes.FirstOrDefault(attr => attr.Is(target));
if (attribute == null)
{
throw new InvalidQueryStringParameterException(queryParameterName,
"The attribute requested in query string does not exist.",
$"The attribute '{target}' does not exist on resource '{_requestResource.ResourceName}'.");
}
return attribute;
}
/// <summary>
/// Helper method for parsing query parameters into relationships attributes
/// </summary>
protected RelationshipAttribute GetRelationship(string queryParameterName, string propertyName)
{
if (propertyName == null) return null;
var relationship = _requestResource.Relationships.FirstOrDefault(r => r.Is(propertyName));
if (relationship == null)
{
throw new InvalidQueryStringParameterException(queryParameterName,
"The relationship requested in query string does not exist.",
$"The relationship '{propertyName}' does not exist on resource '{_requestResource.ResourceName}'.");
}
return relationship;
}
/// <summary>
/// Throw an exception if query parameters are requested that are unsupported on nested resource routes.
/// </summary>
protected void EnsureNoNestedResourceRoute(string parameterName)
{
if (_requestResource != _mainRequestResource)
{
throw new InvalidQueryStringParameterException(parameterName,
"The specified query string parameter is currently not supported on nested resource endpoints.",
$"Query string parameter '{parameterName}' is currently not supported on nested resource endpoints. (i.e. of the form '/article/1/author?parameterName=...')");
}
}
}
}
|
STACK_EDU
|
Daily/Weekly List of Events – Detailed (SAMPLE_EVENT_LIST_DETAILED.REP with REP_EVORD.FMX)
This is a line report that is extremely useful as a daily or weekly communication tool to all departments in the hotel about upcoming catering events, the location where they are being held i.e., the function space, number of attendees, set-up style, etc. It is somewhat more detailed than its sister report ‘Daily/Weekly List of Events – Simple'. In addition to the basic event details, this report also prints items from the booking level like the responsible managers, booking and catering status as well as event status. Event notes can optionally be printed, and the report indicates if there are sleeping rooms associated with the catering events or not. The report is sorted and grouped first by date and then within each date sorted by booking ID, followed by event start time.
Note: When printing this Report help topic, we recommend printing with Landscape page orientation.
Note: This report requires that the user be granted the EVENT PRINT permission in the selected property to run the report. Also, the Property LOV in this report is further limited by the properties to which the user has the appropriate access granted.
This is a customizable report.
Account. Select an account to print only events linked to that account.
Contact. Select a contact to print only events linked to the contact.
Business Block. Select a Business Block to print only events linked to that business block.
Note: When selecting a Master Block ID in the Bus.Block filter, Opera will also include the Sub Block IDs linked to that Master. The report will then print all Events linked to the Master and its Sub Blocks. If a Sub Block ID is selected, only Events linked to that Sub Block will be printed.
Date Range. Select the date range of the events that should be included in the report.
Event Status. Select the status of the events that should be included in the report.
Event Type. Select the event types of the events that should be included in the report.
Function Space. Select a function space to print only events booked in that space.
Event w/o Space Assignment only. If this flag is checked only events that fit the other filter criteria that do not have a function space attached to them will print. Also, when this flag is selected, the Function Space filter will become inaccessible along with any values selected in that filter.
Print Event Notes ?. If checked, event notes will print underneath the Doorcard field on the report.
Print Alternate Events. When checked, alternate events will be included in the report.
Report Type - Daily/Weekly. When DAILY is selected a page break per day will occur. On WEEKLY the report will print continuously for the span of the selected date range.
Copies. Determines the number of copies that will print, when the Print button is selected.
Preview. Use the preview option to view the generated output of this report in PDF format.
Print. Use the Print button to print the report to the selected output.
File. Prints the generated report output to an *.rtf file.
Close. Closes this report screen.
The report will print in booking ID order within each date. The primary company name, primary contact name, booking name, booking status, catering status, rooms owner, catering owner, booking ID will print on the booking group level as well as an indication if any sleeping rooms are also part of the booking and there are, how many.
Underneath the booking information the events attached to the booking for the appropriate date are listed in start time order. Information that is printed for each event includes event description, event ID, event status, event start and end time, doorcard, assigned function space, number of attendees and the set-up style for the function space.
If event notes are attached to any event and the Print Event Notes flag has been checked on the report filter form, these event notes will print underneath the Doorcard field of each event.
|
OPCFW_CODE
|
There are several design patterns used these days in the .NET ecosystem. What are they? What are the benefits and drawbacks of each pattern?...
How App Developers Should Prepare for iOS 7
By now, though, nearly everyone has embraced the inevitable. And many developers have had hands-on time with iOS 7, so the initial reaction to mere screenshots has been tempered by actual use.
So, with release imminent, it’s time to offer some suggestions to app developers awaiting iOS 7. But first, a quick debate is in order:
Should i update my app for ios 7?
Point: Apple carefully sets the stage for what is coming next. They introduced auto-layout, then the larger screen. When they double down on a technology, you can bet there’s good reason to pay attention. What if you simply made sure your app isn’t broken on iOS 7? You could get away with it—for about a year. Come iOS 8 though, Apple is going to build on things they introduced this year, leaving you with two years’ of advances to catch up on. And if you don’t update, you might be ceding the market to your hungrier competitors.
Counterpoint: It is challenging to build a business solely on apps. Your business case might not support updating existing apps for Apple’s latest software, and starting a new app may offer more opportunity. Your competitors are weighing the same tradeoffs, and your customers mainly care about something that solves their problems. Customers might not care if you’re late to the party, as long as your software keeps doing what they need.
How do I prepare for ios 7?
If you decide against iOS 7 support, I suppose your work is done. But that’s boring, so let’s assume you decide to support Apple’s latest release. What should you do for existing apps?
Figure out where your app stands. If you haven’t identified what issues your app will have in the upcoming release, you are already way behind. Get a device with the beta version of iOS 7 and start using it daily so you can get a feel for the new platform and find out where your app doesn’t fit in. Here’s a hint: Look at the colors of both text and standard navigation controls. Check any popup dialogs or overlays. Look carefully at your table views, including around the edges.
Get your app compatible with Apple’s upcoming software. You can’t ship until Apple does, but you won’t have much time when they announce the release. You should be working on a compatible build shortly, if you aren’t doing so already. In fact, we’re doing this for our own products over the next two weeks. I have good news: Simple compatibility shouldn’t require too much work on your part.
Identify if there are any big wins for your app from the new APIs and features. What counts as a win for your app? It’s a bit hard to explain, but it could be something that would make a compelling use case. A beautiful, elegant, innovative solution to a real problem. Something novel but familiar. Something that Apple might want to feature, and that your customers might delight in.This is where you decide how much you really want to invest in iOS 7. (Sidenote: I can’t wait until we can tell you about some of the great new tools and APIs!)
Understand that there will be some glitches along the way. Example: I’m not convinced that removing all indication of tappable regions (i.e., button edges) is best for users, but it may just demand more of us as developers. We may need to spend less time getting assets from our designers, and more time working with our designers to make sure that every element responds as the user expects it to. We may need to get more comfortable with animations and 3D transformations. We may need to push past our comfort levels and explore brave new ideas.
So get ready. It’ll be fun.
|
OPCFW_CODE
|
Can't install my own app from Play Store
Not sure whether it's related to the problem or not, but this affects only my own apps. When I try to install any of them, it downloads twice for some reason, then I get this message:
Can't install [APP] Try again, and if it still doesn't work, see common ways to fix this
Other people can install and use my apps with no problems.
Here's what I tried:
I used Titanium Uninstaller to completely uninstall my app after testing it via ADB (yes it works fine with ADB and bundletools)
I cleared google play's data and cache
I cleared any data/cache folder related to my app
I completely wiped system, data, cache and storage of the phone I use for testing and reinstalled the firmware, first thing I did afterwards is trying to install my app and I got the same problem.
Here's what I got from logcat: (com.shakibb.dlala is the package name for my app)
05-22 00:20:54.624 23836-23836/? I/Finsky: [1] com.google.android.finsky.download.DownloadBroadcastReceiver.b(1): Intent received at DownloadBroadcastReceiver
05-22 00:20:54.626 23836-23836/? I/Finsky: [1] hmr.a(59): com.shakibb.dlala: onProgress 1472622/1472622 Status: 200 URI: content://downloads/my_downloads/39.
05-22 00:20:54.634 17611-20434/? W/zipro: Error opening archive /data/data/com.android.providers.downloads/files/Dlala | الدلالة : Achat Vente en Algérie ����-5.bin: Invalid file
05-22 00:20:54.638 17611-25829/? W/zipro: Error opening archive /data/data/com.android.providers.downloads/files/Dlala | الدلالة : Achat Vente en Algérie ����-5.bin: Invalid file
05-22 00:20:54.658 23836-23836/? I/Finsky: [1] frp.a(11): Selecting account [aNaMyeNBhJG7ajSBD6GdsUVaT9A] for package com.shakibb.dlala. overriding=[true]
05-22 00:20:54.668 975-975/? W/StatusBar: removeFakeNotificationViews()---removeNotification for unknown pkg: com.android.providers.downloads
05-22 00:20:54.669 975-975/? I/PhoneStatusBar: updateNotificationCountChange,mLastHasNotification:true , hasActiveNotifications:true
05-22 00:20:54.691 799-1554/? I/libPerfService: perfSetFavorPid - pid:0, 0
05-22 00:20:54.707 23836-23836/? I/Finsky: [1] hmk.a(5): com.shakibb.dlala from 2 to 3.
05-22 00:20:54.715 23836-23836/? I/Finsky: [1] hmr.e(-1): com.shakibb.dlala: onComplete
05-22 00:20:54.715 23836-23836/? I/Finsky: [1] hmr.i(1): Download com.shakibb.dlala removed from DownloadQueue
05-22 00:20:54.717 288-288/? I/installd: free_cache(0) avail<PHONE_NUMBER>
05-22 00:20:54.717 799-1026/? I/NetworkIdentity: buildNetworkIdentity:
05-22 00:20:54.718 799-1026/? I/NetworkIdentity: networkId = DJAWEB_F8826
05-22 00:20:54.719 799-1510/? I/NetworkIdentity: buildNetworkIdentity:
05-22 00:20:54.719 799-1510/? I/NetworkIdentity: networkId = DJAWEB_F8826
05-22 00:20:54.721 23836-23836/? I/Finsky: [1] kct.a(100): Prepare to copy com.shakibb.dlala (adid: com.shakibb.dlala , isid: QnPaZv5KSVC13AUwZp9Qaw) from content://downloads/my_downloads/39 (expect 1821314 bytes, isCompressed: true)
05-22 00:20:54.721 23836-23836/? I/Finsky: [1] hvz.a(7): APK integrity will be verified using [SHA-256] method
05-22 00:20:54.728 975-975/? I/KeyguardUpdateMonitor: handleBatteryUpdate index=0 updateInteresting=false
05-22 00:20:54.729 1316-1762/? I/BatteryConsumeMonitor: init var lastPlugType : 2 lastBatteryChangedLevel : 87
05-22 00:20:54.732 799-820/? I/libPerfService: perfSetFavorPid - pid:23836, 5d1c
05-22 00:20:54.732 23836-23870/? W/Finsky: [410] ctj.a(1): Copy error (source-FileNotFoundException) for com.shakibb.dlala (com.shakibb.dlala): java.io.FileNotFoundException: No such file or directory
05-22 00:20:54.736 256-755/? E/Vold: Failed to find mounted volume for /storage/sdcard1/Android/data/com.android.vending/files/
05-22 00:20:54.736 256-755/? W/Vold: Returning OperationFailed - no handler for errno 0
05-22 00:20:54.737 23836-23836/? W/ContextImpl: Failed to ensure directory: /storage/sdcard1/Android/data/com.android.vending/files
05-22 00:20:54.737 23836-23836/? I/Finsky: [1] kct.b(174): Retry download of com.shakibb.dlala (adid: com.shakibb.dlala , isid: QnPaZv5KSVC13AUwZp9Qaw) (inhibit 1024)
05-22 00:20:54.741 23836-23836/? I/Finsky: [1] kct.a(318): Downloading full file for com.shakibb.dlala (com.shakibb.dlala)
05-22 00:20:54.742 23836-23836/? I/Finsky: [1] hmk.a(3): Duplicate state set for 'com.shakibb.dlala' (0). Already in that state
05-22 00:20:54.742 23836-23836/? I/Finsky: [1] hmr.a(38): Download com.shakibb.dlala added to DownloadQueue
05-22 00:20:54.744 23836-23836/? I/Finsky: [1] hmk.a(5): com.shakibb.dlala from 0 to 1.
My device is: Meizu M2 running flyme 6.2.2
|
STACK_EXCHANGE
|
Internet traffic is becoming more and more important since the company you are working for is focused on e-commerce. Every minute that their webservers running webshops are unavailable is causing profit loss. The company decided need a scalable solution and get rid of the single router (NewJersey), so there is no single point of failure anymore. Up to you to start configuring!
- All IP addresses have been preconfigured as following:NewYork: F0/0: 192.168.1.1 /24
NewYork: F0/1: 192.168.2.1 /24NewJersey: F0/0: 192.168.1.2 /24
NewJersey F0/1: 192.168.2.2 /24
L.A.: F0/0: 192.168.1.3 /24
L.A.: F0/1: 192.168.2.3 /24
HOST: F0/0: 192.168.1.200 /24
IPS: F0/0: 192.168.2.254 /24
- The ISP router has the following loopback interfaces, these are used to simulate the Internet.Loopback0: 172.16.1.1 /24
Loopback1: 172.16.2.1 /24
Loopback2: 172.16.3.1 /24
- The host router has been configured with “no ip routing” which will turn it into an ordinary host.
- OSPF has been configured on all routers except the host router for full connectivity.
- Configure NewYork, Newjersey and L.A. for VRRP, use the group number “1”.
- The virtual IP Address should be 192.168.1.254 /24 .
- Newjersey should be the master router, when it fails L.A. should take over.
- Hello packets should be sent every 7 seconds.
- Make sure the router with highest priority will always be the Master router.
- Configure authentication for VRRP, use password “vault”.
- When the HSRP active router’s F0/1 interface goes down, make sure it’s no longer the master VRRP router.
- Configure the virtual IP address of VRRP as default gateway on the Host Router.
- Ensure you can ping the loopbacks of the ISP router from the Host router.
- Ensure that whenever 2 out of 3 routers are down, the Host router still has connectivity to the ISP.
It took me 1000s of hours reading books and doing labs, making mistakes over and over again until I mastered all the routing protocols for CCNP.
Would you like to be a master of routing too? In a short time without having to read 900 page books or google the answers to your questions and browsing through forums?
I collected all my knowledge and created a single ebook for you that has everything you need to know to become a master of routing.
You will learn all the secrets about VRRP, gateway redundancy and more.
Does this sound interesting to you? Take a look here and let me show you how to Master CCNP ROUTE
Configuration FilesYou need to register to download the GNS3 topology file. (Registration is free!)
Once you are logged in you will find the configuration files right here.
|
OPCFW_CODE
|
textarea wont update to mysql database field PHP
I am trying to update the contents of the User_Bio field in my Mysql Database, the update query works fine up until I add that in. it must be something to do with the textarea. I'm aware that I'm open to SQL injection using this method, but I have to code this website once, then recode it with security improvements, so I will be adding protection from that at a later date.(the database column names match up perfectly)
$updateProfileQuery = "UPDATE users SET User_Firstname='" . $_POST['firstname'] . "', User_Surname='" . $_POST['surname'] . "', User_Bio='" . $_POST['bio'] . "' WHERE User_ID=" . $_SESSION['userid'];
$saveUpdate = mysqli_query($dbc, $updateProfileQuery);
<h1><small>Bio:</small></h1>
<textarea form="post" class="form-control" name="bio" id="bio">' . $row['User_Bio'] . '</textarea>
the form method is post and works fine up until the textarea is added into the SQL command.
textarea HTML element has a form property, but it is not designed to contain a method, but the name of the form containing the textarea. The method should be specified in your HTML form tag! http://www.w3schools.com/tags/tag_textarea.asp
sorry mis-read that, debug level one= print_r($_POST); check every thing is filled-in and labeled as expected
Please update with complete form. Looks like you have added text area a method property.
I appear to have fixed it, I added a form="form name" into the textarea tag, it didn't work straight away, but now it is, didn't change anything else though :/
that solution makes no sense, but if it works ....
I was wrong, it was a comma, in the text area entry, I'm guessing its causing the SQL to fail, fixed now with encoding to html before posting to database, thanks anyway! :)
WARNING: When using mysqli you should be using parameterized queries and bind_param to add user data to your query. DO NOT use string interpolation or concatenation to accomplish this because you will create severe SQL injection bugs. NEVER put $_POST data directly into a query. Your "solution" of encoding for HTML is completely, dangerously wrong. Placeholders will always escape your content correctly and you won't have insertion errors if you use them in a disciplined manner.
I'm aware of how at risk it is, but this website will never be hosted, its version 1.0 for a Uni assignment, our next assignment is build upon this website to protect and hopefully prevent our lecturer from implementing SQL injection, he specifically asked us not to for this assignment.
If you're not going to use placeholders, then you have to use mysqli_real_escape_string to escape the data, in case it contains quotes.
A comma in the text area shouldn't cause problems, but a quote would. Some people call them inverted commas, is that what you meant?
Sorry, yes, it was a single quote, I've been awake for over 24 hours now, my concept of reality is fading :D
@tadman outdated might be a better word than wrong. This method was pretty common once. It's still in tutorials out there in the wild. It's probably even in some books.
@TimOgilvy Using htmlspecialchars on database data is completely wrong, it doesn't even come close to serving the function of escaping for the SQL context. Any books that recommend that need to be thrown in the fire and burned. There's a reason the hall of shame is so big and a lack of proper escaping has claimed more than enough victims. Teaching people the dangerous way first is quite an irresponsible approach. It's like wearing safety glasses when using power tools. It's just common sense.
@tadman, not saying you are wrong, just that your wording is fairly confrontational. Shaming noobs is a bit harsh.
@TimOgilvy Sorry if it comes across that way, but a light shaming is a lot gentler than having your whole server hacked wide open because you made a simple mistake you didn't know was an issue.
|
STACK_EXCHANGE
|
Your app’s description is one of the biggest deciding factors for users to download your app or not. Your copywriting technique and keyword strategy will get users’ attention and increase your app downloads.
A concise description with the appropriate keyword density rate that is also creative can be tricky. To strike this balance, here are techniques for writing a great description that will increase your app’s downloads.
Why keywords will increase your app downloads
You can have a great app, an amazing logo design, and outstanding screenshots followed by a video produced by the best, but if there is a lot of competition around your “app type” or frequently-used keywords, you might not get much visibility. To increase your app’s visibility, try finding keywords niches (synonyms, for example) aside from the popular ones in your category. It is better to be in the top results of a less searched term than low in the results of a top-searched term.
These keyword best practices when executing your strategy will help you increase app downloads:
- Place the strongest keywords in the app name, as it holds the most weight in the search algorithm.
- Include keywords in your app developer name.
- Separate keywords with commas, not spaces, to maximize character count.
- Repeat keywords up to 5 times in the app description, there’s no additional impact when repeated more than 5X.
Note that in Google Play, the description functions like a SEO tool. In iOS however, the description has no impact on ASO as keywords are filled into a separate chart.
[TOOLS!] These sites provide assistance in selecting keywords to optimize your app description:
- SEO Book helps to calculate optimal keyword density
- Metrics Cat analyzes performance, monitors competitors, and suggests keywords
- WordStream generates keyword suggestions for up to 30 searches
- Sensor Tower will suggest keywords and perform analytics on them for your app
Why translating your description will increase visibility
Nearly 75% of consumers want to buy products in their native language. Since app stores serve consumers in more than 150 countries, appeal to more users to give your app extra exposure by localizing the description into multiple languages.
[TOOLS!] These resources will bring you one step closer to reaching a global market:
- Android and Apple provide step-by-step instructions for localizing your app
- Localize Direct provides popular languages to localize into across assorted genres
- Use Google Translate to quickly translate your app, or find a native translator on Fiverr for just $5
Why a call to action in the default fold benefits you
The default fold is the first couple of lines of your app’s description (generally the first 167 characters) that users can see without having to click to expand. Capitalize on these 167 characters to catch users’ attention, communicate value, and incentivize a user to download your app. Most users will decide whether to download or not just by reading this, so take good care of it to boost your app promotion.
Begin your default fold with the most exciting content. Has your app been featured somewhere, received excellent ratings and reviews, or gathered thousands of downloads? Increase your app downloads and credibility by letting users know! Take a look at this example:
★ ★ ★ ★ ★ Five amazing stars for its UX and level design! A really addictive game for every moment of the day – By Tappx developers community.
PS: Could be a review from anyone 😉
Why linking to social networks will increase app downloads results
Finally, your app’s description can include back-links promoting your website, social platforms, and other apps. This is always valuable, for showing a community of users around the app will improve your app ranking and at the same time can increase traffic to your other sites.
|
OPCFW_CODE
|
5 Tips for Being Better at Remote
COVID-19 was for many people, their first introduction to the working-from-home experience. I’ve been lucky enough to have been working from home exclusively since 2009 and at US-based companies (e.g. GitHub) since 2012. I’ve also been an EU-based open-source maintainer since 2007 and maintaining the globally-distributed Homebrew package manager since 2009. These experiences have helped me learn some tips on how to be more effective working remotely that I’d like to share with you:
⚙️ Evaluate performance on output, not on working hours: When everyone’s not colocated in the same place for meetings, pair programming, etc, you don’t need everyone to work the same schedules. As long as there’s enough overlap for some synchronous meetings, most work should be done asynchronously at the times and places that are most productive for that individual. 40 hours spent “working” unproductively can be worse for the business than 20 focused hours spent working at maximum efficiency. Introspect when and how you are most productive and encourage your teams to do so too.
✍️Write more, meet less: A one-hour meeting can technically be recorded but passing around a recording is rarely the best way of transferring information. Instead, consider writing documents which can be linked, shared, edited and kept evergreen for discussion. Keep synchronous meetings to a minimum in terms of regularity and attendance, and save them for tasks which cannot be done asynchronously or individually.
📧Slack and email are async, not sync, tools: Email and, increasingly, Slack are not well-loved tools because they are often ineffectively used as synchronous, interrupt-driven tools requiring an immediate response. This destroys focus, concentration and individual productivity. Don’t send messages saying “Have you got a minute?” unless it’s incredibly urgent that you schedule a sync call now. Instead, ask “Hey, about the meeting yesterday, can I grab your thoughts on the migration when you have a minute?” and expect it may take 24 hours to get a response. Getting Slack and work email off your phone will also help with work-life balance. If it’s critical you can be contactable 24/7: have a more focused process or tool for that purpose e.g. iMessage, WhatsApp, PagerDuty.
🤪 Use emoji to convey emotion: I have been told, particularly when my profile picture looked a bit grumpy, that I can “write like an arsehole”. People can read brevity (combined with a grumpy profile picture) and assume anger, irritation or disappointment when it’s unintended. Emojis can help with this. Regular use of 😂😍🎉 etc, can be used to explicitly convey your emotion and gratitude.
🛬 Meet in person (sometimes): Even if distributed around the globe, like GitHub and Homebrew are, it’s important to have as many people as possible meet in person once a year or more. Technology is wonderful but it’s hard to “get” people to the same extent when you aren’t able to read body language and other signals. Try to make the best use of this colocated time together to do things you can’t do normally e.g. synchronous brainstorming. Primarily focus on getting to know each other better and improving human relationships, particularly across traditional organisational boundaries (yes: sales and engineering can be friends!).
Hopefully, this has given you some ideas of how your distributed, remote, hybrid, or even colocated culture can operate a little more effectively.
Campfire is sponsored by Shepherd and Wedderburn's initiative to supercharge start-ups and scale-ups. Be sure to follow the Start to Scale LinkedIn page for useful videos and posts designed to help founders.
|
OPCFW_CODE
|
BINGO - Bullshit Bingo
Bullshit Bingo is a game to make lectures, seminars or meetings less boring. Every player has a card with 5 rows and 5 columns. Each of the 25 cells contains a word (the cell in the middle has always the word "BINGO" written in it). Whenever a player hears a word which is written on his card, he can mark it. The cell in the middle is already marked when the game starts. If a player has marked all the words in a row, a column or a diagonal, he stands up and shouts "BULLSHIT". After this, the game starts over again.
Sitting in a lecture, you observe that some students in the audience are playing Bullshit Bingo. You wonder what the average number of different words is until "BULLSHIT" is exclaimed. For the purpose of this problem, a word consists of letters of the English alphabet ('a' to 'z' or 'A' to 'Z'). Words are separated by characters other than letters (for example spaces, digits or punctuation). Do the comparison of words case-insensitively, i.e. "Bingo" is the same word as "bingo". When counting the number of different words, ignore the word BULLSHIT (indicating the end of the game), and consider only the words of the current game, i.e., if a word has already occurred in a previous game, you may still count it in the current game. If the last game is unfinished, ignore the words of that game.
The input file consists of the text of the lecture, with "BULLSHIT"
occurring occasionally. The first game starts with the first word in the input.
Each occurrence of "BULLSHIT" indicates the end of one game.
You may assume, that
- the word "BULLSHIT" occurs only in uppercase letters
- every word has at most 25 characters, and each line has at most 100 characters
- there are at most 500 different words before a game ends
- the players follow the rules, so there is no need to check if a game is valid or not
- at least one game is completed
The output consists of one number: the average number of different words needed to win a game. Write the number as a reduced fraction in the format shown below. Reduced fraction means that there should be no integer greater than 1 which divides both the numerator and denominator. For example if there were 10 games, and the number of different words in each game summed up to 55, print "11 / 2".
Programming languages can be classified BULLSHIT into following types: - imperative and BULLSHIT procedural languages - functional languages - logical BULLSHIT programming languages - object-oriented BULLSHIT languages
9 / 2
In the sample input, there are 4 completed games. The number of different words is 5, 5, 4 and 4, respectively.
Don't use Fractions module to reduce in Python. Also, no need to check for "gluedtoBULLSHIT" cases.
This game should be played in every classroom!
AC in 1 go with python
Very easy in python.
very nice game !
You read until EOF. Depending on the programming language you use, it is detected differently, so you may have to look it up.
How do I tell when all the data has been input? The input consists of multiple lines after all.
|
OPCFW_CODE
|
RMCA for Windows 95
containing a MS Windows 95 executable
of the RMCA
and some complementary programs
(random, moveout, crystal)
The package contains (once unzipped by Winzip) :
The fortran codes were translated into C codes by the f2c software
and then compiled by MSVC++ 2.0.
rmca.exe the executable (32 bits for Windows 95 and possibly
NT - not for 16 bits Win 3 or DOS).
cubig.dat a test file.
cubig.cfg the starting configuration for the test file (with
1000 atoms instead of the original 250).
cusq.dat the interference function for the test file (Cu).
random.exe the win95 executable for building a random initial
moveout.exe moves atoms in a configuration apart if they are
too close together.
crystal.exe produces a crystalline configuration.
coord.dat a test file for crystal in "automatic" mode.
The resulting executable (158Ko) is reasonably fast as can be judged
from the approximate times necessary for generating 100000 moves for the
test file (with 1000 Cu atoms) :
Machine Pentium 100Mhz Pentium II 266Mhz Pentium II 333Mhz
Ram 16Mo 64Mo 64Mo
Time 36mn 9mn 7mn20sec
For comparison with those PC machines, running the test file on
a DEC ALPHA AXP 4200 needs 18mn.
The PC RMCA program starts by a double click on its name in the Windows
Explorer (or typing 'rmca' in a DOS box opened in the directory containing
rmca.exe and the data files). A window is opened. You are prompted to give
the entry filename (here cubig, the test file) :
There are two differences with the original RMCA software. The .his
file is not created, and the timelim and timesav parameters have different
significations. Instead of being the time the program should run for, in
minutes, timelim is now the total number of moves generated before the
program will stop. Timesav is the interval number or generated moves at
which the results should be saved to the output file. Also the file containing
the experimental data (cusq.dat here) is given without the .dat extension
in the main data file (cubig.dat here).
Get the manual RMCA.ps and the Fortran source code at the RMCA
Running the test file, you should see at the end (the Chi**2 oscillates
around 5 after 15000 accepted moves) :
The resulting fit of S(Q) is the following :
How to use Random :
How to use Moveout (T=True, F=False) :
How to use Crystal in "automatic mode":
Copyright © October 1997
- Armel Le Bail
See also the GLASSVIR and NOCHAOS
related programs and an application to fluoride
|
OPCFW_CODE
|
MarkerCluster resets the sizes of the clustered markers
When clustering several markers, the size gets reseted to the default one.
[ x] I'm reporting a bug, not asking for help
[x ] I'm sure this is a Leaflet.MarkerCluster code issue, not an issue with my own code nor with the framework I'm using (Cordova, Ionic, Angular, React…)
[x ] I've searched through the issues to make sure it's not yet reported
How to reproduce
var icon = plateMark.options.icon;
var size = 64 * entry.kJ / 8000;
icon.options.iconSize = [size, size];
plateMark.setIcon(icon);
Leaflet version I'm using: 1.0.3
Leaflet.MarkerCluster version I'm using: 1.0.5
Browser (with version) I'm using: Firefox 53
OS/Platform (with version) I'm using: Linux Ubuntu GNOME 16.04
What behaviour I'm expecting and which behaviour I'm seeing
I expect the icons the stay the previously defined size.
Minimal example reproducing the issue
[ ] this example is as simple as possible
[ ] this example does not rely on any third party code
Using http://playground-leaflet.rhcloud.com/beg/1/edit?html,output or any other jsfiddle like site.
Please include your reproduction case in the jsfiddle so we can run it and understand it.
Instruction on how to actually reproduce it too please thanks!
Hi, here is a jsfiddle
http://playground-leaflet.rhcloud.com/kofeb/1/edit?html,js,output
Hi,
This is absolutely not specific to Leaflet.markercluster, and this might in a certain manner be a bug from Leaflet base, or more probably a matter of proper wording.
In your provided playground, you set icon from marker into marker2 ("marker2.setIcon(icon)"), so no doubt both markers will have the same icon scaled by 0.5 (as applied on icon from marker).
Once this is corrected, both icons are still getting the same modified size (but this time the final size is scaled by 0.5 * 0.9), and this is reproducible with a simple Layer Group instead of a MarkerClusterGroup. Therefore nothing to do with Leaflet.markercluster plugin.
By changing some options from icon option of marker, this actually already affects the icon option of marker2 as well (and from any other previously or later created marker). We can even check the equality of both icons (marker.options.icon === marker2.options.icon gives true).
This is the direct consequence of the L.Marker class default icon option, which is set as a new L.Icon.Default(). This actually means that all markers without a specified icon will use that exact instance of L.Icon.Default.
I think that is the intended behaviour, but I feel its description might be understood otherwise: "If not specified, a new L.Icon.Default is used" may imply that a new instance is created for each marker…
That being said, you have several very simple workarounds:
For each of your markers, at instantiation explicitly specify a new icon (possibly the default one: L.marker(latLng, { icon: new L.Icon.Default() }).
Updated playground: http://playground-leaflet.rhcloud.com/yimo/1/edit?html,output
Patch L.Marker class so that its behaviour matches its icon option description (the way I described it above):
Updated playground: http://playground-leaflet.rhcloud.com/hiju/1/edit?html,output
L.Marker.mergeOptions({
icon: null
});
L.Marker.addInitHook(function () {
if (!this.options.icon) {
this.options.icon = new L.Icon.Default();
}
});
Next time you might rather use Stack Overflow or GIS Stack Exchange first, to reach a wider audience and possibly get faster help.
thanks for your help, guess thats been a stupid typo using icon instead of icon2 - which happened by simplifying my code into a tiny example. nevertheless, thank you very much - I'll try to extract where exactly this happens. In my other code, I use a new icon for every marker, they all have different sizes, until they get clustered. I'll try to provide another playground.
|
GITHUB_ARCHIVE
|
I was recently working on an issue in the production environment. We have an application which is a combination of an SQL database and a .net application. So, most of the fixes we do will either be in the form of code or just a SQL script that will fix the metadata in the database. For the issue that we identified recently, we had to alter a SQL view to fix the issue. The tricky part here is that I have to execute the script on around 700 databases in a production environment. If it was a single database it shouldn't be a challenging task.
I initially thought of having the SQL view in a cursor script to loop through all the databases and execute it. But unfortunately, it is not possible to run DDL statements like that using a cursor script. Scripts with GO command cannot be used inside cursor. After researching a bit and looking for alternative solutions, I finalized that the Powershell script should be an easy way to execute the script looping through all the databases.
First I had to make sure if it was possible for me to use PowerShell script for this task as I was not sure what type of restrictions they would have implemented in the production environment. If there was no permission to run scripts from Powershell, it would have been a tough task. But luckily PowerShell had permissions to execute SQL commands. I wrote the below script to complete the task. I'm going to split the script into three parts and traverse you through the script.
Declaring the parameters for the connection string
In order to open a connection, we will need the following parameter values:
- DB Host Instance (This can be with port)
- Database Name
- Database User Name
- Database User Password
- Script Path
These are the values that are required to prepare a connection string. You can give these values based on your server details. Invocation path refers to the path from which the PowerShell script is executed. You can also give a direct path to your SQL script if in your case, the script is going to be in a different location.
Get a db connection
In this part of the code, we are going to establish a connection to the database with the connection string that we created in the previous code block. I'm using a try catch block to handle exceptions in the code. So, we create a dataSet and write a query to get all the databases from the server.
SELECT [name] FROM [$dbName].dbo.sysdatabases WHERE dbid > 4
In the above code, we can also use the master database directly instead to get all the databases. We have to give dbid as greater than 4 because the first 4 databases are always system databases. As a next step, we initiate a data adapter and pass the query to it. We then fill the values we get to the dataSet. At the end of this code, the dataSet will have the list of all the databases from the server.
Traverse through each database
This is the part where we actually execute the script. In this part of the code, we use foreach command to loop through the dataSet that we created and run the SQL script against each database one by one. Just to see which database is currently being performed, I have used an echo statement before and after the query execution.
invoke-expression "SQLCMD /S '$dbHostInstance' /U '$dbUsername' /P '$dbUserPassword' /l 60 /d '$($row)' /i '$ScriptPath' /v DB = '''$($row)'''" -ErrorVariable err2 -ErrorAction Stop
This part of the code then executes the provided SQL script on all the databases one by one. This is a little time-consuming process and if you have more databases in the server this can take a while. But in the log, you will be able to see which database is currently being processed.
That's all and it is as simple as that. If you like my content, upvote, follow, and share. You can also share your thoughts in the comments section.
Carbon was used to create Code Screenshots
If you find this article interesting, please vote, share and follow! Also, please share your thoughts in the comments section.
|
OPCFW_CODE
|
package ca.ualberta.cs.models;
import java.util.ArrayList;
/**
* Transforms a ElasticSearchOperationRequest to a ElasticSearchOperationResponse
* @author wyatt
*
*/
public class ElasticSearchOperationFactory {
public static ElasticSearchOperationResponse responseFromRequest(
ElasticSearchOperationRequest theRequest,
ElasticSearchResponse<?> serverResponse) {
// Get the response type
ElasticSearchOperationResponse response = new ElasticSearchOperationResponse(
theRequest.getRequestType());
TopicModel theTopicModel = theRequest.getTopicModel();
// Update the ID
String theId = serverResponse.getId();
theTopicModel.setId(theId);
// Update the version
int theVersion = serverResponse.getVersion();
theTopicModel.setVersion(theVersion);
response.setTopicModel(theTopicModel);
response.setPostModelList(theRequest.getPostModelList());
return response;
}
public static ElasticSearchOperationResponse responseFromRequest(
ElasticSearchOperationRequest theRequest,
ElasticSearchSearchResponse<TopicModel> esResponse) {
ArrayList<TopicModel> theTopicModelArrayList = new ArrayList<TopicModel>();
for (ElasticSearchResponse<TopicModel> r : esResponse.getHits()) {
TopicModel theTopic = r.getSource();
if (theTopic.getId() == null || theTopic.getId() == "null") {
theTopic.setId(r.getId());
}
theTopicModelArrayList.add(theTopic);
}
ElasticSearchOperationResponse response = new ElasticSearchOperationResponse(
theRequest.getRequestType());
response.setTheTopicModels(theTopicModelArrayList);
response.setPostModelList(theRequest.getPostModelList());
return response;
}
public static ElasticSearchOperationResponse responseFromCommentsRequest(
ElasticSearchOperationRequest theRequest,
ElasticSearchMgetResponse<TopicModel> esResponse) {
ArrayList<UpdatePackage<CommentModel>> initialRequest = theRequest
.getTheCommentIdsToGet();
ArrayList<ElasticSearchMgetDoc<TopicModel>> theDocs = new ArrayList<ElasticSearchMgetDoc<TopicModel>>();
int length = initialRequest.size();
for (ElasticSearchMgetDoc<TopicModel> theDoc : esResponse.getDocs()) {
theDocs.add(theDoc);
}
for (int i = 0; i < length; i++) {
TopicModel theModel = theDocs.get(i).getSource();
UpdatePackage<CommentModel> thePackage = initialRequest.get(i);
if (theModel != null) {
CommentModel theComment = theModel
.fetchCommentWithId(thePackage.getMyId());
thePackage.setTheUpdatedModel(theComment);
} else {
thePackage.setTheUpdatedModel(null);
}
}
ElasticSearchOperationResponse response = new ElasticSearchOperationResponse(
theRequest.getRequestType());
response.setTheFollowingCommentsList(theRequest
.getTheFollowingCommentsList());
response.setTheCommentIdsToGet(initialRequest);
return response;
}
public static ElasticSearchOperationResponse responseFromTopicsRequest(
ElasticSearchOperationRequest theRequest,
ElasticSearchMgetResponse<TopicModel> esResponse) {
ArrayList<UpdatePackage<TopicModel>> initialRequest = theRequest
.getTheTopicIdsToGet();
ArrayList<ElasticSearchMgetDoc<TopicModel>> theDocs = new ArrayList<ElasticSearchMgetDoc<TopicModel>>();
int length = initialRequest.size();
for (ElasticSearchMgetDoc<TopicModel> theDoc : esResponse.getDocs()) {
theDocs.add(theDoc);
}
for (int i = 0; i < length; i++) {
TopicModel theModel = theDocs.get(i).getSource();
UpdatePackage<TopicModel> thePackage = initialRequest.get(i);
if (theModel != null) {
thePackage.setTheUpdatedModel(theModel);
} else {
thePackage.setTheUpdatedModel(null);
}
}
ElasticSearchOperationResponse response = new ElasticSearchOperationResponse(
theRequest.getRequestType());
response.setTheFollowingTopicsList(theRequest
.getTheFollowingTopicsList());
response.setTheTopicIdsToGet(initialRequest);
return response;
}
}
|
STACK_EDU
|
The next two days, we are attending the Berlin Adaptve Multimedia Retrieval 2008 Workshop at the Heinrich Hertz Institute being located in downtown Berlin. So, it's pretty close to home and the only travelling involved was by S-Bahn :)
The first speaker is Francois Pachett from Sony CSL giving a keynote entitled "What are our audio features worth?"
The fundamental questions are "What makes objects what they are?", ""What are the features of subjectivity?", "How do we perceive objects and how can we transfer this to a machine?" Pachet's research is concerned with the classification of musical objects based on the so called polyphonic timbre that describes the sum of all features of a music object. Interesting thing is the identification of hubs, i.e. songs that are pretty close to every other song. Hubs in general seem to be mere artefacts of static models.
Interesting fact ist that there are companies now, predicting if your song is going to be a hit. Their judgement also relies on feature analysis and they even give recommendations how your song can be improvent to become a hit. Of course you have to pay for that service...but does it really work??
After the coffee break, there's a session on User-Adaptive Music Retrieval. The first talak is presented by Kay Wolter from Fraunhofer IDMT Ilmenau on "Adaptive User-Modelling for Content-Based Music Retrieval". They are adapting a content-based music retrieval system (CBMR) according to user preferences that are determined by acceptances and rejections of recommended songs by the user, which is furthermore used to improve the quality of music recommendations....Reminds me somehow to Pandora or last.fm...
The second talk is presented by Sebastian Stober from Otto-von-Guericke-Universität Magdeburg on "Towards User-Adaptive Structuring and Organization of Music Collections". So, wouldn't it be nice to structure your music collection automatically...but not in the way the software tells you, but the way you like it? The presented system is based on an general adaption approach using self-organizing maps that can be adapted by user interaction.
The first afternoon session is on "User-adaptive Web Retrieval" and starts with a presentation of Florian König from Johannes-Kepler-Universität Linz on "Using thematic ontologies for user- and group-based adaptive personalization in web searching". He introduces Prospector, which is a generic meta-search layer for Google, not constrained only to web search, based on re-ranking of search results and deploying user modells based on Open Directory Project (ODP) taxonomies. As far as I have understood, the applcation is based on the carrot2 framework for open source search engine result clustering.
Next, David Zellhöfer from BTU Cottbus presents on "A Poset Based Approach for Condition Weighting". Similarity search can be determined according to different conditions w.r.t. the search query. Esp. different people have different expectations if it comes to similarity. So, condition weights have to be determined by psychological experiments.
The second afternoon session is about "Music Tracking and Tumbnailing" and starts with a presentation of Tim Pohle from Johannes-Kepler-Universität Linz on "An Approach to Automatically Tracking Music Preference on Mobile Players". Ok, so the basic problem is, someday you will get bored by the music selection on your ipod. Therefore, the goal is to remove songs that you don't like anymore and replace them with new songs that you probably will like. How do you achieve this? Well, with according user feedback, i.e. by tracking the user's decision on choosing or skipping tracks. Tracks that have recently been skipped often will be dropped and replaced by tracks that are similar (according to some feature analyses) to the remaning tracks.
Next, Björn Schuller from Technische Universität Münschen is presenting on "One Day in Half an Hour: Music Thumbnailing Incorporating Harmony- and Rythm Structure". Music thumbnailing is some really cool feature, Just imagine, your sitting in your car and you are looking for another track to hear, but your player always starts songs at the beginning and they have long and boring intros. Therefore, getting to the most interesting (or significant) part of the song immediately would really be something...
The sessions close with an invited talk given by Stefan Weinzierl and Sascha Spors on "The Future of Audio Reproduction. Technology - Formats - Applications". Promissing title, let's see.... We start with a brief history of audio recording and reproduction technology starting from the very first phonograph to modern multichannel spatial surround sound systems. So, the future seems to be real sound field synthesis (wavefield synthesis, WFS) instead of relying on psycho-acustic effects as in today's stereo. Here, an array of loudspeakers reproduces exactly the wave front of the original sound source. For transmitting signals like this, no single channels are recorded anymore, but the original sound signal (without spatial characteristics of the room where it has been recorded, because this would interfere with the characteristics of the room, where it is reproduced) including movement and position of the sound source. Besides existing VRML and MPEG-4 Audio BIFS that focus more on visual scene description than on audio scene descriptions, there is the proposal of a new modeling language for high resolution spatial sound events called ASDF (Audio Scene Description Format).
[...to be continued in Adaptive Multimedia Retrieval 2008 in Berlin, June 26-27, 2008 - Day 02]
|
OPCFW_CODE
|
Error on create mysql tables?
Im trying to create a database in mysql, the first bit of my code works fine but then i get a syntax error on:
CREATE TABLE Project_Staff (
empID INT NOT NULL,
projID INT NOT NULL,
CONSTRAINT
i dont understand where is the error.
Here is my code:
CREATE TABLE Employees (
empID INT NOT NULL AUTO_INCREMENT,
empSurname VARCHAR(255) NOT NULL,
empLastname VARCHAR(255) NOT NULL,
empJobtitle VARCHAR(255) NOT NULL,
empLinemanager VARCHAR(255) NOT NULL,
CONSTRAINT pk_employees PRIMARY KEY (empID)
) ENGINE=InnoDB;
CREATE TABLE Skills (
sklID INT NOT NULL AUTO_INCREMENT,
sklName VARCHAR(255) NOT NULL,
CONSTRAINT pk_skills PRIMARY KEY (sklID)
) ENGINE = InnoDB;
CREATE TABLE Employees_Skills (
empskID INT NOT NULL AUTO_INCREMENT,
empskLevel INT NOT NULL,
sklID INT NOT NULL,
empID INT NOT NULL,
CONSTRAINT fk_employees_skills FOREIGN KEY (sklID) REFERENCES Skills(sklID),
CONSTRAINT fk_employees_skills_1 FOREIGN KEY (empID) REFERENCES Employees(empID),
CONSTRAINT pk_employees_skills PRIMARY KEY (empskID)
) ENGINE = InnoDB;
CREATE TABLE Project (
projID INT NOT NULL AUTO_INCREMENT,
projName VARCHAR(255) NOT NULL,
projDuration INT NOT NULL,
projStartdate VARCHAR (255) NOT NULL,
CONSTRAINT pk_project PRIMARY KEY (projID)
) ENGINE = InnoDB
CREATE TABLE Project_Staff (
empID INT NOT NULL,
projID INT NOT NULL,
CONSTRAINT fk_project_staff FOREIGN KEY (empID) REFERENCES Employees(empID),
CONSTRAINT fk_project_staff FOREIGN KEY (projID) REFERENCES Project(projID)
) ENGINE = InnoDB
CREATE TABLE Skill_For_Project (
sklreqDuration INT NOT NULL,
projID INT NOT NULL,
sklID INT NOT NULL,
CONSTRAINT fk_skill_for_project FOREIGN KEY (sklID) REFERENCES Skills(empID),
CONSTRAINT fk_skill_for_project FOREIGN KEY (projID) REFERENCES Project (projID)
) ENGINE = InnoDB
what error you getting
What's the exact error? What is line number 9?
you have Duplicate key name 'fk_project_staff':
Duplicate key name 'fk_skill_for_project':
mising empID in skills table. you may interested in Employees(empID) in Skill_For_Project table.
You have misses Semicolon at the end of Create Table Statement
here full working code
CREATE TABLE Employees (
empID INT NOT NULL AUTO_INCREMENT,
empSurname VARCHAR(255) NOT NULL,
empLastname VARCHAR(255) NOT NULL,
empJobtitle VARCHAR(255) NOT NULL,
empLinemanager VARCHAR(255) NOT NULL,
CONSTRAINT pk_employees PRIMARY KEY (empID)
) ENGINE=InnoDB;
CREATE TABLE Skills (
sklID INT NOT NULL AUTO_INCREMENT,
sklName VARCHAR(255) NOT NULL,
CONSTRAINT pk_skills PRIMARY KEY (sklID)
) ENGINE = InnoDB;
CREATE TABLE Employees_Skills (
empskID INT NOT NULL AUTO_INCREMENT,
empskLevel INT NOT NULL,
sklID INT NOT NULL,
empID INT NOT NULL,
CONSTRAINT fk_employees_skills FOREIGN KEY (sklID) REFERENCES Skills(sklID),
CONSTRAINT fk_employees_skills_1 FOREIGN KEY (empID) REFERENCES Employees(empID),
CONSTRAINT pk_employees_skills PRIMARY KEY (empskID)
) ENGINE = InnoDB;
CREATE TABLE Project (
projID INT NOT NULL AUTO_INCREMENT,
projName VARCHAR(255) NOT NULL,
projDuration INT NOT NULL,
projStartdate VARCHAR (255) NOT NULL,
CONSTRAINT pk_project PRIMARY KEY (projID)
) ENGINE = InnoDB;
CREATE TABLE Project_Staff (
empID INT NOT NULL,
projID INT NOT NULL,
CONSTRAINT fk_project_staff FOREIGN KEY (empID) REFERENCES Employees(empID),
CONSTRAINT fk_project_staff2 FOREIGN KEY (projID) REFERENCES Project(projID)
) ENGINE = InnoDB;
CREATE TABLE Skill_For_Project (
sklreqDuration INT NOT NULL,
projID INT NOT NULL,
sklID INT NOT NULL,
CONSTRAINT fk_skill_for_project FOREIGN KEY (sklID) REFERENCES Employees(empID),
CONSTRAINT fk_skill_for_project2 FOREIGN KEY (projID) REFERENCES Project(projID)
) ENGINE = InnoDB;
http://sqlfiddle.com/#!2/d75182
As far I see, there are two issues
You missed a semicolon after create project table
CREATE TABLE Project
( projID INT NOT NULL AUTO_INCREMENT,
CONSTRAINT pk_project
PRIMARY KEY (projID) ) ENGINE = InnoDB; <-- Here
Both your constraint names (like fk_project_staff in Project_Staff table and fk_skill_for_project in Skill_For_Project table) are same; Try giving them a different name
|
STACK_EXCHANGE
|
DNS and DNS attacks
DNS is one of the most used protocols on the Internet, and you have probably heard a lot about DNS attacks on the Internet. In this series, I will explain more about the DNS attack types, and the reasons behind using them.
The DNS Protocol
Domain Name Server, or DNS for short, is a protocol that is mainly focused on translating the so-called human format name of a site (the domain name), into the Internet address (IP address), and is often referred to as the Internet phonebook. For example, when you want to go to www.radware.com using a browser, your browser will automatically perform a DNS request to its DNS server to translate www.radware.com into its IP address – 184.108.40.206. The browser will then use this IP address to get the content from www.radware.com. Each enterprise or ISP has its own DNS server that serves its users. The DNS server is automatically configured into any connected device so it can perform DNS queries, usually using DHCP. Public DNS servers are also available, such as Google’s famous 220.127.116.11 DNS server or openDNS (recently acquired by Cisco), which also provide many services on top of the simple DNS response.
DNS is one of the Internet’s foundations and was originally published in 1983 in RCFs 882 and 883, which were later replaced by RFC 1034 and 1035 in 1987. The DNS protocol seems like a simple protocol, but in fact the DNS’s infrastructure and variants can get complicated. Over the years, some newer extensions were added to the DNS protocol such as DNS over TCP and DNSSEC, which enhanced its capabilities and security while making it more complex. Many companies built their name and reputation on DNS, the most famous being Verisign, which owns some top-level DNS servers. DNS keeps attracting new startups that use DNS for network viability and security such as ThousandEyes and infoblox.
DNS attacks –
The wide usage of DNS on the Internet also led to a wide usage of DNS as an attack vector. DNS attacks are very common; once in a while a new vector is found and gains popularity over another vector, yet the DNS-related attacks always have a place of honor in the hall of fame.
The DNS attacks can be divided into several groups:
- Reflection attacks: This type of attack is used to attack a 3rd party victim, even if he does not run a DNS server. This attack vector is one of the most common vectors in the DDoS world. Its popularity came from the fact that it is completely spoofed (it’s very hard to identify the attacker), and it can amplify the attack bandwidth in a way that allows a few original packets to cause saturation of a large Internet pipe. These attacks will be the subject of my next article in this DNS series.
- Server attacks: These attacks are aimed at specific DNS servers and can have several objectives, the most common of which is to cause denial-of-service. Another common objective is to obtain all the data stored in a DNS server, in order to study the organization’s network infrastructure. Such study is later used to find effective attack vectors. Yet another attack objective is to get control over the server and server’s data using protocol vulnerabilities and anomalies. While both authoritative and recursive DNS servers are victims to such attacks, different attacks are used with each of them, leveraging the different mode of operation of each to maximize the attack impact. ISPs, hosting providers and any other company that hosts a public DNS server often suffer from such attacks.
- Spoofing results: These attacks aim to change a DNS valid response into a malicious response. While the attack is launched on the DNS server itself, the attack is actually focused on the DNS server’s users. The goal is to trick the user to go to a malicious site instead of a known legitimate site. The technique is mostly used as part of a phishing attack on personal-data or financial-related site. Once the DNS server is tricked into responding with the wrong data, it is very hard to detect the attack. This is because the person using the site is doing everything right, and on first look everything seems legitimate, while a man-in-the-middle attack is actually taking place in the background.
- DNS tunnels: This technique is not an attack per-se, rather it is a way to use DNS’s infrastructure and protocol to pass data under the radar. The technique is using the DNS protocol as a tunnel, while actually sending the data inside the DNS requests and responses. This technique is used to bypass corporate firewalls, Wi-Fi monetization mechanisms, Data-Loss-Prevention systems and any other technology used to inspect or limit data over the wire. Malwares often use this technique to pass data and communicate with the outside world, in order to avoid the organization’s security infrastructure.
All of the above attacks are widely used and can have a lot of impact, and combining all of them together explains the popularity of DNS attacks on the Internet today. The DNS-based attacks are a constant threat, and any security professional should get familiar with them, as well as pose a plan of how to fight them in case his infrastructure is being attacked.
Read “Creating a Safe Environment for Under-Protected APIs” to learn more.
|
OPCFW_CODE
|
Apr 17, 2023
What should you learn in this course
Random Number Generator
+ : addition
- : subtraction
/ : division
* : multiplication
% : modulus
- The sign bit (the 31st bit): positive is 0, negative is 1.
- The nex 8 bits: the exponent value.
- The remaining 23 bits represent the fraction value.
This results in the following:
- 자바스크립트의 숫자 시스템은 다른 언어와 달리 32-bit floating-point representation을 사용하고 있다.
- 기본적으로 숫자는 컴퓨터에 저장될 때 2진수로 저장된다. 그러나, 숫자 10이 2진수로 저장될 때 1001로 저장되는 것과 달리, 소수점은 깔끔하게 떨어지지 않는다. 따라서 아래와 같은 결과가 나온다.
To really understand why 0.1 cannot be represented properly as a 32-bit floating- point number, you must understand binary. Representing many decimals in binary requires an infinite number of digits. This because binary numbers are represented by where n is an integer.
What is integer division?
3/2 = 1, 항상 정수만 리턴하는 나눗셈. round down 값을 반환
Integer division in programming languages like Java simply evaluates division expressions to their quotient (몫).
5/4 is 1 in Java because the quotient is
5/4 = 1.25
This is because Java requires you to explicitly (명시적으로, 확실하게) type the integer as an integer.
Math.round - rounds to nearest integer
Math.floor - rounds down to nearest integer
Math.ceil - rounds up to nearest integer
Number.EPSILON returns the smallest interval between two representable numbers. This is useful for the problem with floating-point approximation.
This function works by checking whether the difference between the two numbers are smaller than
Remember that Number.EPSILON is the smallest difference between two representable numbers. The difference between 0.1+0.2 and 0.3 will be smaller than Number.EPSILON.
⇒ For example, the smallest number among the bigger number in 0.1 will be 0.00000000001…. something. However, computer can recognize that small number so they decide epsilon which is the smallest number of JS to compare the size of numbers. Therefore, the smallest number among the bigger number in 0.1 will be
0.1 + EPSILON
It returns the largest integer.
what is integer?
정수. 정수는 양의정수, 음의정수, 그리고 0이 있다. (분수, 소수 등 포함 x)
- The value of the largest integer n such that n and n + 1 are both exactly representable as a Number value. Therefore, it does not work for floating-point decimals.
It returns the largest floating-point number possible. Equal to approximately 1.79E+308.
Therefore, this uses double-precision floating-point representation and works for floating points as well.
It returns the smallest integer. It is equal to -9007199254740991.
Number.MIN_VALUE returns the smallest floating-point number possible.
Number.MIN_VALUE is equal to 5e-324. This is not a negative number since it is the smallest floating-point number possible and means that Number.MIN_VALUE is actually bigger than Number.MIN_- SAFE_INTEGER.
Number.MIN_VALUE is also the closest floating point to zero.
The only thing greater than Number.MAX_VALUE is Infinity, and the only thing smaller than Number.MAX_SAFE_INTEGER is -Infinity.
Infinity < Number.MIN_SAFE_INTEGER < Number.MIN_VALUE < 0 < Number.MAX_
SAFE_IN- TEGER < Number.MAX_VALUE < Infinity
A primality test can be done by iterating from 2 to n, checking whether modulus division (remainder) is equal to zero.
What is prime number?
The time complexity is O(n) because this algorithm checks all numbers from 0 to n.
- Think about how this method iterates through 2 to n. Is it possible to find a pattern and make the algorithm faster? First, any multiple of 2s can be ignored, but there is more optimization possible. This is difficult to notice, but all primes are of the form
6k ± 1, with the exception of 2 and 3 where k is some integer. Here’s an example:
- Also realize that for testing the prime number n, the loop only has to test until the
square root of
n. This is because if the square root of n is not a prime number, n is not a prime number by mathematical definition.
This improved solution cuts the time complexity down significantly.
Prime numbers are the basis of encryption (covered in Chapter 4) and hashing (covered in Chapter 11), and prime factorization is the process of determining which prime numbers multiply to a given number. Given 10, it would print 5 and 2.
This algorithm works by printing any number that is divisible by i without a remainder. In the case that a prime number is passed into this function, it would be handled by printing whether n is greater than 2.
It returns a float between 0 and 1.
Simply multiply Math.random() by the range. Add or subtract from it to set the base.
Simply use Math.floor(), Math.round(), or Math.ceil() to round to an integer.
|
OPCFW_CODE
|
OK, now that Chapter 24 has been posted, I can address the issues brought up in the chapter. While the chapter is a nice diversion--since it's a return to the always popular 'medical subplot' and veers away from the day-to-day interactions--a big part of it was on some more minor issues.
First of all, Alex doesn't really accomplish much here. That was meant to show that the types of problems the military is wrestling with aren't easily solved. They're dealing with messy situations as best they can, and it's difficult for someone like Alex to just swoop in and rescue anyone. Instead, everyone focus was on what Alex was doing could tell them about their own approaches (not always an easy message to sell).
Secondly, this is addressing an increasing drift in the story, that concerning both Alex's and various reader's fear that the governments' 'black hats' may drop from their super secret black helicopters at any moment. I hope this puts much of that to bed. Alex will still be cautious, after all, he rightly decided not to reveal his name to Jeff (Caity's companion in Chapter 23) because he didn't trust him, but the idea that the government is a super efficient agency constantly monitoring anything that anyone says is, while largely true, largely a misnomer. They may collect a shitload of data on people (potentially less than Google does, by the way), but the people in the military are hardly a homogenous group, and treating them as if they are each spies with the ready access of 'super secret' agencies in the Pentagon stretches the imagination a bit much.
But mostly, this was a planned reshifting of the story's focus. The story had been drifting towards a big confrontation. I'd brought the issue up a little while ago, asking which way people saw the story going. It was pointed out to me that having Alex play a martyr's role wasn't really in keeping with the rest of the story, so I decided to distance myself from that one plot element. This allows Alex to sideline many of those fears, just as Gail got him to sideline his 'I'm not an angel' speech, and Cate got him to reconsider his 'I'm no one special' stance.
Hopefully that will allow me to stop continuing to talk about those issues, and to get on to some new and more interesting things.
P.S. If anyone didn't notice, I tried to shift the focus in the last two chapters. Whereas the story bogged down in Dallas, that was intentional, as I wanted to show how Alex was overwhelmed and how each person having to hear the exact same thing was getting to him, especially since he couldn't keep up anymore.
However, now that I've gotten past that, I can now just summarize his meeting dozens of new followers every night, and skip over a lot of what he tells new people, which helps move the story along. (Though all bets are off next chapter (#25), when Alex has to try to explain to a bunch of church ladies what he's trying to accomplish--that should be interesting, since he can't say much of his traditional spiel).
|
OPCFW_CODE
|
See country page for more information.
- Longwe, S. H. 2000. ‘Towards Realistic Strategies for Women's Political Empowerment in Africa’, in Women and Leadership, Caroline Sweetman (ed.). Oxford: Oxfam. pp. 24-30.
- Tamale, S. 2000. ‘”Point of order, Mr Speaker”: African women claiming their space in parliament’, in Caroline Sweetman (ed.) Women and Leadership, Oxford: Oxfam. pp. 24-30.
- Tripp, A.M. 2000. Women & Politics in Uganda, Madison: University of Wisconsin Press; Oxford: James Currey and Kampala: Fountain Publishers.
- Butegwa, Florence. 1999. ‘Building Women's Capacity to Participate in Governance’, paper presented at the Capacity Building North and South Links and Lessons Conference, July 1-3.
- Tamale, S. 1999. When Hens Begin to Crow: Gender and Parliamentary Politics in Uganda. Boulder: Westview Press.
- Inter-Parliamentary Union. 1997. Democracy Still in the Making: A World Comparative Study. Geneva: Inter-Parliamentary Union.
- Kabebari-Macharia, J. 1997. ‘Asserting the Right to Political Decision-making’, GENDEReview – Kenya's Women and Development Quarterly. 4. no. 1: 13-14.
- Kalebbo, G. D. 1996. ‘How to Make it to Parliament’, Women's Vision, April 30.
- Uganda Parliament website, http://www.parliament.go.ug/
Inter-Parliamentary Union. http://www.ipu.org/
Inter-Parliamentary Union. “Women in Parliament in 2015: The year in review”: http://www.ipu.org/pdf/publications/WIP2015-e.pdf
Inter-Parliamentary Union. “Women in Parliament: 20 years in review”: http://www.ipu.org/pdf/publications/WIP20Y-en.pdf
Inter-Parliamentary Union. “Women in Parliaments: World and Regional Averages”: http://www.ipu.org/wmn-e/world.htm
Inter-Parliamentary Union. “Women in Parliaments: World Classification ”: http://www.ipu.org/wmn-e/classif.htm
Women's Environment and Development Organization, www.wedo.org
Online Women in Politics, www.onlinewomeninpolitics.org
Note: The sources and additional reading indicated above are mostly only available in English. We welcome recommendations of additional sources in other languages.
|
OPCFW_CODE
|
We all are aware that computers work in languages in which codes are developed. This smoothens our work. Computer languages were developed more than three decades ago. One such language, which is one of the oldest programming languages used, is Python. Though debates still remains on Mac or Windows which is right for you?
It is often said that when you are starting to learn something new, you always want it to be good. For program developers, other than the best free code editors, the first and the best language that they learn is Python. Learning tuples is a part of learning the python language.
Here are common operations using tuples in Python that you should know.
However, before you understand common operations using python, let us know why you chose Python as a programming language.
Why choose python?
Python is currently one of the most popular high-level languages, which Guido van Rossum developed in the 1980s. It has the advantage of being both powerful and straightforward to work with. It is loved because it is shorter and easier to comprehend. Another major reason people choose this language is that it tries to focus on the solution to the problem and not the structure or syntax of the problem.
Let us understand the common operation using Tuples in Python that you should know.
What are tuple and tuple operations?
A tuple is one of the four built-in data types that Python uses to store collected data. Tuple operations are performed on the elements in the tuple data structure.
Some most widely used Tuple Operations in Python
Count Occurrences of an element in the tuple
This method is used to count the total occurrences of an element in a tuple. If the element is not found in a tuple, then the function says 0.
It looks like:
tup1 = (1,2,3,4,5,6, 5, 6, 5, 7, 5)
#count the number of time 5 occurs in the tuple
print (tulp1.count(5) )
Finding the position of an element in Tuple
The index method is used to find the index/position of an element in the tuple. If an element occurs more than once, the function returns to the index of the first occurrence. Also, you have to consider that if you try to find an index of the element that is absent in the tuple, the function shows ValueError.
Joining two or more tuples
It is not a herculean task. You can join two or more tuples by using the + operator.
Converting a string into a tuple
tuple () constructor is used to convert a list by passing it as a parameter to the tuple constructor.
A loop is used inside the tuple constructor to create the tuple.
You can use a comma that unpacks the list inside the tuple and is the fastest among the above methods.
The contents of the tuple can be multiplied using the * operator.
Finding the total number of elements in the tuple
You can find the total number of items in an object by using len () with the tuple.
Finding minimum and maximum elements in a tuple
Minimum elements can be found by using the min() function with the tuple. To get the maximum elements use the max() function with the tuple.
Finding all elements in the tuple
Sum () function is used to calculate the arithmetic sum of all elements in a tuple.
Shuffling a tuple
Tuples cannot be shuffled directly. They are immutable. Tuples can be shuffled only by using lists. It can be done in three steps:
Typecast tuple to a list
Shuffle the list
Typecast list back to a tuple.
Reversing a tuple
This can be done using the slicing technique. This process creates a new copy of the tuple.
Tuple operations in Python facilitate us to perform tasks with minimal lines of code.
We, at OpenGrowth, are committed to keeping you updated with the best content on the latest trendy topics from any major field. Also, both your feedback and suggestions are valuable to us. So, do share them in the comment section below.
|
OPCFW_CODE
|
It is no secret that most high school students struggle with their homework and would do with any assignment help they can find. The amount of work students complete in class and after school is increasing and this has even generated a heated national debate over the years. With so much work and little time left for personal growth, there are concerns that the current crop of students might develop health problems that will emerge later in adulthood.
If you are struggling to cope with high school assignment problems, it is time to find a lasting solution. A good coping mechanism will not only help you complete your assignments on time but also ensure the work submitted earns you a good grade.
The following tips will help you deal with assignments through high school:
- Active Class Participation
- Seek Clarification
- Manage Your Time
- Choose Conducive Environment
- Prepare And Plan For Your Assignment
- Avoid Procrastination
If you want to complete your assignments without a problem, you need to attend every class and participate. Your teacher’s role is to breakdown even the most difficult concepts to help you grasp the basics. You might have noted that students who are active in class hardly struggle with their assignments and this is because they have inculcated all the basics.
Before starting on any assignment writing, make sure you understand what the question is all about. You need to go through the guidelines and if there is any need, ask for clarifications. Teachers are always eager to provide direction to students and you might even get invaluable tips on how to go about the assignment.
Time is a scarce resource and as such, you need to manage it carefully. Most students struggle with homework not because they don’t have the prerequisite skills but because they have poor time management skills. In high school, you have to create a study schedule and allocate assignments enough time.
Every day, set some time aside to complete your assignments and you will never have a problem with deadlines. By setting aside time to complete your assignments, you also prepare your mind for these tasks and it becomes easier to concentrate and do more work.
If you want to complete your assignments on time, make sure you identify a serene place away from any distractions. Now that you have created time for these tasks you need a good place where your mind can work optimally. Before seeking help from an assignment maker, choose a secluded place and if possible, make this your regular base for study. You can for instance choose a corner in the library and identify the best time to settle down for your assignments.
Before settling down and starting your homework, look at the requirements and identify all resources required. Source all these resources before starting the task. If possible, break down the assignment into smaller sections and allocate each some time. Plan for short breaks in your schedule to motivate yourself and also refresh your brain.
If you have an assignment, start working on it at the earliest opportunity. Avoid delays by sticking to your schedule. Your plan should list the assignments in order of priority and this way, you will never have to worry about missing a deadline. On the other hand, you can find a professional who would write essay for you.
If you have just searched the phrase ‘someone to make my assignment’ these tips will help you collaborate more effectively with your assignment expert.
|
OPCFW_CODE
|
One of the things I like about bureaucracy is that it attempts to be fair and evenhanded.
This isn’t always a good thing – badly designed bureaucracies let people fall through the cracks. Bad design can be as cruel than nepotism and favoritism.
But, in general, a good system of rules includes this idea underneath it all:
Rules are applied equally and evenly, regardless of who you are.
In practice, it doesn’t always work that way. Putting aside human corruption and prejudice, someone who has the resources to hire a lawyer will probably fare better than someone who does not. Someone whose parents went to college will have that handed-down knowledge of how the system works, as opposed to a first generation college student.
This problem comes to a head with EULAs and Terms of Service. Particularly when it comes to the “shrinkwrap” some people put at the end of their emails.
You’ve seen it before:
This message, including any attachments, is confidential. If you have received this e-mail in error, please alert the sender; delete the entire e-mail; and do not deliver, distribute, copy, disclose or take any action in reliance upon the information contained herein.Actual boilerplate, used under Fair Use
It’s utterly ridiculous to think that a paragraph of text in an otherwise ordinary email can suddenly limit your ability to take action based on the information in that email.
There’s the copyright/fair use dichotomy, sure. But again, that’s rules that exist outside of the actual message.
Which is why I really enjoy the anti-EULA and its variants. Mine is here: https://ideatrash.net/anti-eula. All it is is the exact same thing, except for one big difference: It releases you from all such bogus contracts.
Yeah, it’s totally laughable. Which is the point.
Again, let me reiterate this: The point is that neither “after-the-fact” restriction is enforceable.
Unlike a liability waiver (which is not always enforceable either), these stupid unilateral agreements are after the body of the message. So either these agreements are enforceable – and therefore the anti-EULA gets triggered – or they’re not, in which case the original boilerplate is not enforceable either.
Again, other rules or contracts may apply. But putting this in the signature of your email is just fine – as long as you accept what’s in the signature of mine.
Before you read it.
Featured Photo by Scott Graham on Unsplash
This is different than, say, a HIPAA privacy requirement, or another contract or NDA that you may have signed outside of an email.
Copyright Disclaimer Under Section 107 of the Copyright Act in 1976; Allowance is made for “Fair Use” for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. Non-profit, educational or personal use tips the balance in favor of fair use. All rights and credit go directly to its rightful owners. No copyright infringement intended.
|
OPCFW_CODE
|
Virtual Databases – Why They are Necessary, What Their Benefits Are, and How They are Displayed
The concept of the virtual data-base improves the handling of DB information in the technical system information. This blog post describes:
- Former relationship of a technical system to a data-base system without virtual DB
- Problems occurring without virtual DB
- … for multiple components in 1 DB system
- … for replication scenarios
- New relationship of technical systems to DB systems
- SAP Host Agent (SHA) DB monitoring
- Landscape data after the introduction of virtual data-bases
- Scenarios including virtual data-bases:
- System monitoring scenario
- Complex scenarios
- Display of virtual DBs in the LMDB Technical System Editor
Former Relationship of a Technical Systems to a DB System
Figure 1: Landscape data without virtual data-bases – example: Technical system (AS ABAP with Data-Supplier transaction RZ70 and Diagnostics Agent.
The former registration of data-bases worked as follows:
- Before using virtual data-bases, via transaction RZ70 both data-base system and technical system AS ABAP were registered and linked. The data-base system got the same virtual hostname as the technical system (see SAP Note 1052122 – Host names in SLD and LMDB).
- Outside Discovery via the Diagnostics Agent delivered information on data-base system (DB system) and retrieved the 1st (!) existing data-base (e.g. the one registered via RZ70) and linked it to the technical system.
The DB system’s name was composed of DB Name, DB Type and one DB Host from a list.
DBs were searched by these names and, in some cases, matched with e.g. the DB registered via RZ70 and its information was enriched. The hostname is defining the name of the DB. After the 1. match with a host the search ended.
- By that time, it wasn’t necessary to have an abstract relationship between AS ABAP system and DB system. For an SAP Solution Manager with DBA Cockpit Monitoring, this is still not needed, since the DBA Cockpit directly connects to the DB. With the introduction of virtual data-bases, this concept was no longer sufficient.
Problem Occurring without Virtual DB for Multiple Components in 1 DB System
Figure 2: A correct description – example: Technical system (AS ABAP) – with multiple components in one data-base (MCOD) is not possible without virtual data-bases.
Because of the introduction of Multiple Components in one DB, the need to enhance the model emerged: Without it, it was not possible with the former model, to describe a landscape having two systems AS ABAP with differing host names sharing one DB system.
Problem Occurring without Virtual DB for Replication Scenarios
Figure 3: Data without using virtual data-bases – example technical system AS ABAP
Without the virtual DB concept, the replication relation couldn’t be represented in the LMDB in a way that the switch to the new physical DB would be correctly interpreted by LMDB client applications, such as monitoring.
New Relationship of Technical Systems to Data-Base Systems
SAP Host Agent (SHA) DB Monitoring
Figure 4: Data with SAP Host Agent DB Monitoring.
Note: Transaction RZ70 does not deliver information about the physical host / DB system.
In the LMDB the linking between the virtual and the physical database is done using association SAP_IdenticalDatabaseSystem, so that in this context, the information gathered on physical DB can be assigned to the right technical system.
Landscape Data after the Introduction of Virtual Data-Bases
Figure 5: Data when using virtual data-bases – example: Technical system AS ABAP (MCOD).
After the Introduction of virtual data-bases, RZ70 delivers data of the technical system AS ABAP and a data-base with its virtual host (in our example ABC with a different host each).
The data-supplier Outside Discovery via the Host Agent delivers data on a data-base, in our example ABC with a physical host). The physical data-base ABC is linked to the virtual data-bases ABC delivered by the two technical systems transactions RZ70 via association SAP_IdenticalDatabaseSystem.
Also see SAP Note 2499629 – Manual activities in LMDB when switching the Outside Discovery by Diagnostic Agent to Outside Discovery by SAP Host Agent.
Scenarios including Virtual Data-Bases
The Replication Scenario
After the introduction of virtual data-bases the replication scenario can be handled as shown in this example:
- A physical data-base ABC with physical host host_2 for the replication is delivered by the HANA data-supplier and linked to the secondary DB by association SAP_DatabaseReplicationDependency (see figure 6a)
The linked data-base ABC with physical host host_1 becomes the primary DB.
- Replication scenario – e.g. at a DB take-over by the secondary DB in the case of a problem with the primary DB (often temporarily): The former secondary DB becomes the primary DB and consequently also becomes the physical DB related to the virtual DB (see figure 6b).
Figure 6a: Data when using virtual data-bases – example: Replication of a technical system AS ABAP.
This happens by switching virtualization to the other physical host and (temporarily, until DB system ABC on host_1 is available again) switch off the replication (using associations SAP_IdenticalDatabaseSystem and SAP_DatabaseReplicationDependency):
Figure 6b: Status during take-over in case of a DB problem.
System Monitoring Scenario
The following shows primary DB and secondary DB in a monitoring UI:
Screenshot of a monitoring UI of a virtual DB with 3 physical data-bases.
If the physical DB had to be replaced (temporarily), this is handled automatically by using the virtual DB concept. All metrics can be kept.
The reconfiguration after a take-over works automatically for SAP HANA.
Complex Scenarios Including Tertiary DBs
Figure 8: Landscape data after introduction of virtual DBs – example: SAP HANA.
Here, we see a HANA-DB with a technical system AS ABAP and 3 Sites:
- Site_A: 1 primary DB on host host_A
- Site_B: 1 secondary DB on host host_B
- Site_C: 1 tertiary DB on host host_C
Upon a data-base take-over from Site_A over to Site_B:
- The replication from Site_A to Site_B is deleted
- The association SAP_IdenticalDatabase is switched to Site_B:
Site_B becomes the primary DB, automatically.
- Site C becomes the secondary DB.
- Even if the physical DB had to be replaced, monitoring. All metrics are kept.
- Site_A can be added again, later.
HANA Data-Bases in Multi DB Replication Scenarios
Figure 9: Multi DB replication scenario.
For a HANA Multi DB Replication with tenants, one more layer has been added:
A tenant is the view of a customer system on a DB or DB instances, respectively.
Display of Virtual DBs in the LMDB Technical System Editor
Virtual DBs are shown in LMDB of SAP Solution Manager 7.2 and Focused Run for SAP Solution Manager:
Figure 10: Physical SAP HANA DB (BZT) and associated virtual DBs (BZT00001).
For information on configuring HANA and virtual databases, see:
- SAP Help Portal (latest versions are shown; choose another version, if required):
- Configuring SAP HANA for System Replication Technical Scenario in SAP Solution Manager
- SAP HANA Administration Guide explaining the use of SAP Solution Manager for SAP HANA Administration
- SAP Support Portal: SLD Registration for SAP HANA SR Systems
- SAP Archive: For scale out scenarios, see SAP HANAä Host Auto-Failover
For information on landscape data, see Get the Status of Your IT Landscape – Data in SLD, LMDB, and SAP Support Portal, and Its Verification.
it works fine for an ABAP landscape especially in case of Disaster Recovery. We realized such settings last year.
But if I am not misstaken there is a little problem in case of a JAVA tenant, e.g. portal.
The reason is described in the note below
• On the AS ABAP, you can change this host name using the profile parameters SAPDBHOST and SLDSYSTEMHOME.
• On the AS JAVA, you can change this host name using the profile parameter j2ee/dbhost
I am not sure that the profile parameter j2ee/dbhost is sufficient to handle a JAVA secondary host in SLD properly especially for sldreg
please check if this helps and I we undrstood your question correctly:
For Java-based SAP NetWeaver systems, using j2ee/dbhost is sufficient to handle an HA scenario via virtual and physical host names in the SLD:
j2ee/dbhost sets the host name for the AS Java and is automatically sent via the data-supplier, accordingly.
The connection to the respective physical host – which is the key information in an HA scenario – is delivered by data-supplier „ComputerSystem“ (just as it is the case with AS ABAP).
The only difference is that SLDSYSTEMHOME of AS ABAP can assign one more name to the physical host-name.
|
OPCFW_CODE
|
"Many a small thing
has been made large
by the right kind
- Mark Twain
ADSL is the acronym for "asymmetric
digital subscriber line", a phone company technology
that expands the capacity of existing copper telephone
lines. ADSL supports data rates of from 1.5 to 9 Mbps
when receiving data - known as the "downstream"
rate - and from 16 to 640 Kbps when sending data -
known as the "upstream" rate. ADSL requires
a special ADSL modem.
DOCSIS is the acronym for "Data Over
Cable Systems Interface Specification". This is a
standard interface for cable modems. Many Cable Companies
have adopted the DOCSIS standard so that you will
not have to worry about current or future compatibility.
Support for the DOCSIS standard makes your modem portable:
if you move to another part of the country and your
new cable service provider is DOCSIS compliant, you
will be able to use your DOCSIS cable modem for high
speed Internet connections.
Digital Subscriber Line or DSL is a
technology that uses existing 2-wire copper telephone
wiring to deliver high-speed data services to homes
and businesses. Offering users a choice of speeds
ranging from 144 Kbps to 1.5Mbps, the technology provides
Internet access that is 2.5x to 25x times faster than
a standard 56Kbps dial-up modem.
The maximum DSL speed is determined by the distance
between the customer's site and the Central Office
(CO). At the customer premises, a DSL router or modem
connects the DSL line to a local-area network (LAN)
or an individual computer. Once installed, the DSL
router provides the customer site with continuous
connection to the Internet and use of the telephone
at the same time.
Ethernet is a local-area network (LAN)
protocol developed by Xerox Corporation in cooperation
with DEC and Intel in 1976. Supporting data transfer
rates of 10 Mbps, it is one of the most widely implemented
LAN standards. A newer version
of Ethernet, called 100Base-T (or Fast Ethernet),
supports data transfer rates of 100 Mbps. And the
newest version, Gigabit Ethernet supports data rates
of 1 gigabit (1,000 megabits) per second.
ICQ is an instant messaging program
developed by Mirabilis LTD. Prounounced like "I-Seek-You,"
ICQ is similar to AOL's popular Buddy List and Instant
Messenger programs. Many Internet users employ the
program for chat, e-mail, to perform file transfers,
play computer games and more.
Once you have downloaded and installed ICQ onto your
computer, you can create a list of friends, family,
business associates who should also be using ICQ.
ICQ uses this list to locate the members of your list
and notifies you once they have signed onto the Net.
You can then send messages, chat in real time and
Internet Protocol or IP specifies the
format of data packets, also called datagrams, and
the addressing scheme, or where those data packets
will go. Most networks combine IP with a higher-level
protocol called Transport Control Protocol (TCP),
which establishes a virtual connection between a destination
and a source.
IP functions something like our postal system allowing
you to address and a package through the system, but
without a direct link between you and the recipient.
TCP/IP, on the other hand, establishes a connection
between two hosts - or, in our analogy, post offices,
so that they can send messages back and forth for
a period of time.
Internet Relay Chat is a chat system
developed by Jarkko Oikarinen in Finland in the late
1980s. IRC has become very popular because it enables
people to join in live discussions. IRC differs a
lot from web chat in part because it offers a wider
range of functionality. And it's very robust: the
larger IRC networks have thousands of users at once
in thousands of channels.
To join an IRC discussion, you need an IRC client
and Internet access. The IRC client is a program that
runs on your computer and sends and receives messages
to and from an IRC server. The IRC server, in turn,
is responsible for making sure that all messages are
broadcast to everyone participating in a discussion.
ISDN, or "integrated services digital
network" is an international communications standard
for sending voice, video, and data over digital telephone
lines or normal telephone wires. ISDN supports data
transfer rates of 64 Kbps (64,000 bits per second).
Most ISDN lines offered by telephone companies provide
two lines at once, called B channels. One line conveys
voice and the other, data, or both lines together
can convey data at rates of 128 Kbps, three times
the data rate provided by today's fastest telphone
The original version of ISDN employs baseband transmission.
Another version requiring fiber optic cables is called
B-ISDN ("b" is for "broadband") and supports transmission
rates of 1.5 Mbps; it is not widely available.
Short for kilobits per second, Kbps is a measure of
data transfer speed. Note that one Kbps is 1,000 bits
per second, whereas a KB (kilobyte) is actually 1,024
bytes. Data transfer rates are measured using the
decimal meaning of K whereas data storage is measured
using the powers-of-2 meaning of K. Technically, kbps
should be spelled with a lowercase k to indicate that
it is decimal but it is nevertheless widely spelled
with a capital K.
While Internet Relay Chat is the system
for chatting, mIRC is one of the software applications
used for IRC. According to www.mirc.net (where you
can download mIRC), "no one is quite sure what the
'm' in 'mIRC' stands for."
To join an IRC discussion, you need an IRC client
such as mIRC, a program that runs on your computer
and sends and receives messages to and from an IRC
Newsgroups are on-line discussion groups.
There are literally thousands of newsgroups covering
every conceivable interest on the Internet. To view
and post messages to a newsgroup, you need a news
reader, a program that runs on your computer and connects
you to a news server on the Internet.
A Network Interface Card or NIC is
an expansion board inserted into a slot inside your
computer so the computer can be connected to and communicate
on a network. Most NICs are designed for a particular
type of network, protocol and media although some
can serve multiple networks. Note: NIC may
be used interchangeably with, "ethernet adapter".
Short for Post Office Protocol, a protocol
used to retrieve e-mail from a mail server. Most e-mail
applications (sometimes called an e-mail client) use
the POP protocol, although some can use the newer
IMAP (Internet Message Access Protocol).
There are two versions of POP. The newer version,
POP3, can be used with or without SMTP (Simple Mail
RAM stands for "Random Access Memory",
a type of computer memory that can be accessed randomly
- that is, any byte of memory can be accessed without
touching the preceding bytes. RAM is the most common
type of memory found in computers and other devices,
such as printers.
In common usage, the term RAM is synonymous with main
memory, the memory available to programs. For example,
a computer with 8M RAM has approximately 8 million
bytes of memory that programs can use. In contrast,
ROM (read-only memory) refers to special memory used
to store programs that boot the computer and perform
diagnostics. Most personal computers have a small
amount of ROM (a few thousand bytes). In fact, both
types of memory (ROM and RAM) allow random access.
To be precise, therefore, RAM should be referred to
as read/write RAM and ROM as read-only RAM.
The acronym for, "registered jack-45",
this is an eight-wire connector used commonly to connect
computers onto a local-area networks (LAN), especially
Ethernets. RJ-45 connectors look similar to the ubiquitous
RJ-11 connectors used for connecting telephone equipment,
but they are somewhat wider.
SMTP, or Simple Mail Transfer Protocol
is used to send e-mail messages between servers. Most
e-mail systems that send mail over the Internet use
SMTP to send messages from one server to another;
the messages can then be retrieved with an e-mail
client using either POP or IMAP. In addition, SMTP
is generally used to send messages from a mail client
to a mail server. This is why you need to specify
both the POP or IMAP server and the SMTP server when
you configure your e-mail application.
Spam refers to electronic junk mail
or newsgroup postings, more specifically, unsolicited
e-mail advertising for some product sent to names
on a mailing list or newsgroup.
Because spam wastes time and also eats up a lot of
network bandwidth, there are many organizations and
individuals who assumed the task of fighting spam.
However, the very public nature of the Internet does
make it as difficult to prevent spam as it is to prevent
There is some debate about the source of the term,
but the generally accepted version is that it comes
from the Monty Python song, "Spam spam spam spam,
spam spam spam spam, lovely spam, wonderful spam&"
Like the song, spam is an endless repetition of worthless
text. Another school of thought maintains that it
comes from the computer group lab at the University
of Southern California who gave it the name because
it has many of the same characteristics as the lunchmeat
Spam: nobody wants it or ever asks for it.
The acronym for Transmission Control
Protocol/Internet Protocol, TCP/IP refers to the suite
of communication protocols used to connect hosts -
computers - on the Internet. TCP/IP uses several protocols,
with the two most important being TCP and IP. TCP/IP
is the de facto standard for transmitting data
over networks and is thus the standard for communicating
on the Internet. Even network operating systems that
have their own protocols, such as Netware, also support
Universal Serial Bus is an external
bus standard that supports data transfer rates of
12 Mbps (12 million bits per second). A single USB
port can be used to connect up to 127 peripheral devices,
such as mice, modems, and keyboards. USB also supports
Plug-and-Play installation and hot plugging.
|
OPCFW_CODE
|
Show current file feature
Git package had a very convenient feature: "show current file". It's just like GitSavvy's "log current file", but instead of taking you to full commit diff, it just opens current file at chosen revision. It helps you see how a particular file looked in the past, and manually bring back some code. I don't see an equivalent in GitSavvy, would be awesome to have it. That's pretty much the only thing I'm missing from Git.
It shouldn't be difficult to implement as described. I've also considered a view where you can walk back in time for a particular file.
For example, you open the palette, type something like git: show file at..., select a revision, and the view pops up. This sounds like basically what you've described. However, at that point, you could then hit maybe Super+Left, and the view would change to display the revision immediately before the visible one.
Thoughts @maxim?
@divmain This walking sounds super convenient (which is what I love about GitSavvy, you just keep taking things a step or two further). Imho, it's such a clean addition that it doesn't even have to be done in 1st iteration of this feature, can be added later, but yes, definitely would instause it.
I would give default key combo a bit more thought however. Super+Left and some other similar bindings are often taken, but say my Ctrl+Alt+Left is available.
Hmm, I think Ctrl+Alt+Left may effect the screen orientation in Windows 7 and later. How about:
Ctrl+Shift+Left - walk back in time (Windows)
Ctrl+Shift+Right - walk forward in time (Windows)
Super+Shift+Left - walk back in time (Linux/OS X)
Super+Shift+Right - walk forward in time (Linux/OS X)
Problem is, you still want to select/copy/navigate the text in that file, and Super+Left/Right is used to jump to beginning/end of line. Holding shift does the same + selection. So the advantage of Ctrl+Alt is that at least it's doesn't seem to be bound to any standard editor functions, and these keys would still be configurable.
We need more modifier keys! Maybe left/right keys aren't the right thing to use here, even though they're the most obvious.
Yeah, it's hard to pick the keys. Also it's complicated by the the fact that even though the file is non-editable, you still wouldn't want to restrict navigation in it, including for people like myself who use Vintage/Vintageous (vim simulation).
I second this, especially with the walking (next/prev) feature.
How about this for keyboard shortcuts? Ctrl+super+alt as modifiers, and dot and comma as next/prev (or vice versa, whichever makes the most sense).
These keybindings would then be similar to the inline diff view where dot and comma is used to walk through the hunks.
:+1: for the feature, :+1: for using dot/comma to mimic inline diff
Seems like an awesome idea! +1
I love dot/comma idea, but it's best to avoid all 3 modifiers, very inconvenient. Ideal for me would be Ctrl+Super+, (no alt)
It helps you see how a particular file looked in the past, and manually bring back some code.
What exactly would that workflow look like?
pop open file at revision 1234
page forward/backwards thru history
copy stuff
close tab
now we're back to the editable file
paste
Or something else?
@yyyc514 exactly. Also if you're refactoring something, and in the process lost something, this is a great UI to quickly refer to what the logic should've been from before refactor, so you can reproduce it differently in the refactored version.
What's the best way to do this and preserve history across renames? git log --follow can follow the history for a file but git show hash:file doesn't work if the file name does not match the exact name of the file as of the hash you passed in.
That's a good question. I believe the old sublime Git package (which has this feature) doesn't follow files across renames. Would be cool if we could find a way to do it.
Another small detail that could be useful - when walking back-and-forth across revisions, perhaps we should try to preserve the scroll position. The code won't be where it was, and if the scroll is beyond the limit of last revision - we'd resort to just scrolling to the bottom, but imo it's better than not doing it.
That would be a neat trick but probably pretty hard to do without a lot of extra logic (stuff that's already built into things like git blame, etc).
A git blame walker could be a lot more powerful than a git log walker, but you'd be limited to jumping to commits that had some remnants existing in the current revision of the file.
@maxim, preserving scroll/cursor position should definitely be do-able and will be included.
@yyyc514, I've had the same thought about blame. Once this history-walker feature lands, a keyboard shortcut could be added to the blame view to take you to the revision at that time.
In my ideal world, I would see an inline-diff highlighted file as I walked back in time, showing the parts of the file that changed between the revision I'm viewing and its parent. But considering the complexity, I think that is over-reaching for a v1, and I'll probably spin that off into another issue to revisit some day.
preserving scroll/cursor position is 100% do-able and will be included.
Neat, but how do you get that information without git blame? A line could move 1000 lines between commits... I know git blame (the version meant to be parsed) has all that info but didn't think you would get that granular with just commits - without rebuilding all that line counting code yourself.
In my ideal world, I would see an inline-diff highlighted file as I walked back in time, showing the parts of the file that changed between the revision I'm viewing and its parent.
That would be interesting. What I typically want to do is walk a single line back thru it's history inside git blame... so I'm seeing the files as they stood and who did what. I can more about what the file looks like at a single revision than the diff... and if I want the diff it's only a keystroke away. I'll make an issue for what I have in mind.
From #276:
I think it would be useful for chain like this:
show commit > open file > show log of this file
Open file works from DIFF view by 'o' shortkey but doesn't work from COMMIT view.
This sub-feature should be included as part of this effort. For a walk through history, we'll need an show_file_at_commit command. To "open file" as suggested by @dlnsk, we'll need this functionality - the file doesn't necessarily still exist at HEAD.
Note to self:
git log --reverse --ancestry-path COMMIT_BEING_VIEWED...HEAD
And another one feature request:
Will be cool if some shortcut (maybe Ctrl+R) on DIFF/COMMIT open popup with list of files changed in it with the ability to go to this part of diff.
the git: log current file and git: graph current file feature on "master" branch provide a partial solution to this issue.
Was the idea of walking through the history and seeing the (w)diff's for a given file or the repo ever done? I can't figure out how to do it.
|
GITHUB_ARCHIVE
|
summary elements come with accessibility issues, so it’s not an inclusive solution.
I previously explored different strategies for removing layout shift from a progressively enhanced burger menu. Before publishing, I removed one technique that used a native HTML disclosure widget (
<summary> elements). I had a demo and WebPageTest result ready to go. The potential solution was exciting:
- The menu starts closed, which means no layout shift (and an excellent Cumulative Layout Shift metric score)
- Has great overall browser support
Sadly, using a native HTML disclosure widget has accessibility issues.
For the purposes of this article, let’s consider the following code snippet:
<details> <summary> <h2>Main menu</h2> </summary> <nav> <ul>…</ul> </nav> </details>Code language: Handlebars (handlebars)
Now let’s look at why it is problematic.
Using voice control with assistive technology like Dragon Naturally Speaking with Chrome or Voice Control (macOS) with Safari means you cannot use the “Click <text name>” command to activate the
<summary> toggle element. Saying “click main menu” does nothing. Using the element role instead (e.g., “click button” or “click summary”) will also fail with Dragon Naturally Speaking.
With some screen readers, such as NVDA with Chrome and Edge and VoiceOver (iOS) with Safari, the disclosure widget expanded state is not announced, nor is the state change when toggled between a collapsed and expanded state.
VoiceOver offers a rotor feature as an alternative method of navigating to elements on the page. But unfortunately, the
<summary> element is not accessible through any of its menus. The
<summary> element has an implicit ARIA role of “button,” but even so, it is not listed under the “buttons” or “form controls” list. It’s as if it’s not even on the page. Accessing the
<summary> element via the “Next Form Control” shortcut won’t work either.
When using VoiceOver (iOS) with Safari, the role of the
summary element is not conveyed. There is no announcement like “button,” “details,” or “summary,” which means a screen reader user won’t know the element is interactive. Instead, assuming the code snippet above, VoiceOver only announces:
main menu, heading level 2
Children of the
<summary> element can have their semantics removed. If you navigate to a
<summary> element containing a heading, VoiceOver (macOS) with Safari does not mention anything about the heading:
main menu, collapsed, summary, group
Neither does JAWS with Chrome, Edge, or Firefox:
main menu, button, collapsed
Nesting a heading in the
<summary> element is a contrived example. If the child element semantics aren’t important (e.g., a
<span> element), then it should be okay. It can also be argued that this is expected behavior since
<summary> has an implicit role of “button.” But this restraint may not be obvious at first and is also inconsistent depending on the assistive technology & browser combination.
<summary> children semantics are removed, there is no longer a way to use a shortcut to jump to the element. For example, with JAWS, you can navigate to the next heading using the
"h" key. No heading semantics means the ability to use shortcuts is not available.
I consider myself an accessibility advocate, and even so, I very nearly published in my previous article a recommendation to use a native HTML disclosure widget as a technique for a burger menu, unaware of the above accessibility issues. Those of us with the privilege of not depending on assistive technology to navigate the web have an increased responsibility to validate that what we design and build is accessible. I wrote this article in hopes that it might help others as I was having a very difficult time understanding how using a native HTML disclosure widget for a burger menu leads to accessibility issues. I also wrote this article as a reminder to myself to check my own privilege as I work toward the goal of helping create a more inclusive web.
- Details-As-A-Menu by Melanie Sumner (thanks so much, Melanie, for working with me to identify the accessibility problems with
- Details/Summary Are Not [insert control here] by Adrian Roselli
- The details and summary elements, again by Scott O’Hara
- Why <details> is Not an Accordion by Dave Rupert
- Accessibility Support: summary element (html)
|
OPCFW_CODE
|
Getting started in VR
This is a set of resources for people who are interested in designing and developing for VR, but aren’t sure where to start. I’ll be focusing on more accessible technology (specifically, Unity and Google Cardboard) to make this guide useful to as many people as possible. Hopefully this will be a good jumping off point for you and you’ll feel more comfortable working in VR. Let’s begin!
What’s even possible in VR?
I think it helps to look at some examples first, to get a sense of what you can and can’t do.
Giant is a short VR movie inspired by real events in the Yugoslav Wars: you’re not an active participant in the story, but you can look around the room and watch the story unfold. The focus is on immersive story-telling and building empathy, not gameplay.
Tilt Brush is a 3D painting tool. You can create 3D, digital artwork with controllers, and view it from all angles. The possibilities are endless!
The Portal: Aperture Robot Repair Vive VR Demo is a 5 minute game experience where you have to, you guessed it, repair a robot. You interact with the world and you can walk around the robot while you fix it. It’s fun, it’s very polished, and it’s also an example of how VR could be educational: imagine learning about anatomy or engineering this way. Though hopefully less catastrophically.
Finally, Job Simulator is a game where you have to perform mundane tasks in comically bad ways. It’s more focused on gameplay than the other 3 examples, and is completely goofy.
What do I need to get started?
One big question I had when I started in VR was simply, “how do? ? ?” What technology do you need, what does it look like to test it, what does a workflow look like.
Here’s a checklist of things you need at the beginning:
- A smartphone
- A Google Cardboard viewer (~$10–20)
- A normal laptop or PC
- A free Unity account
- Android Studio (if you have an Android) or Xcode (if you have an iPhone). Both are free.
Of course, you can shell out for a real headset instead of a Cardboard viewer for your phone, but I wanted to keep costs as low as possible.
All together, if you already have a modern smartphone and a modern computer, you only need to spend a few extra bucks for the viewer. All the software is free. ~Woo~
What does a workflow look like?
It depends what device you’re developing for, and what software you use, but let’s say you’re making a Cardboard iOS app with Unity, like I am.
You’ll build your experience in Unity, on your computer. Unity is a game engine that lets you make 3D and 2D games. Here’s what it looks like:
You don’t need to know how to code to get started, though it will help. We’ll talk about Unity more in a bit.
Once you’re ready to test something on your iPhone, you’ll “Build” this project and “Run” it from Xcode. The transition between Unity and Xcode doesn’t always feel seamless, but this guide will help you work through it.
After that, the game will automatically be running on your iPhone. To get the VR experience, just pop it into your Cardboard viewer and look around. You’re in VR!
If you have any experience with coding or using software like Illustrator or Maya, some of this may feel familiar. If not, that’s okay. You’ll get the hang of it in no time.
How do I learn to use Unity?
Lucky for you, there’s lots of tutorials!
To start learning more about how VR works in Unity, download Google’s Unity SDK to get a sample project to play around with.
There’s so much you can do in Unity that it can feel a little daunting. I recommend thinking of a simple game idea and then googling every question you have. Break each problem down into bite-sized chunks and piece it together from there.
Like I said before, you don’t need to know how to code, but some familiarity with scripting or writing in C, Java, C#, etc will help. Until then, there’s nothing wrong with copy-and-pasting solutions you find online. Don’t let coding deter you from VR.
How do I learn all the buzzwords and best practices?
VR is full of weird terms (reticle) and weird considerations (don’t accidentally trick the brain into thinking it’s been poisoned).
Unity’s VR tutorial helped me learn a lot of those terms and considerations. You can read more about best practices in Oculus’s documentation, in this article by Timoni West, or in this one by Adrienne Hunter. I recommend taking notes on everything you learn: there’s a lot!
One major thing to consider is how VR affects the brain. Movement feels really disconcerting when your body isn’t actually moving at all: this is why most games are stationary, or have limited movement. It’s also so immersive that you have a greater responsibility for the experience you create. You’re literally changing someone’s reality. Don’t do it lightly.
Do I need to know about 3D modeling?
Of course, if you have a very specific vision and want to make everything yourself, then yes, you will have to learn about it. But to get started or to make really simple games, it’s unnecessary. You can even get or buy assets from the Unity asset store if 3D modeling doesn’t interest you at all.
There’s other 3D software you can use too; Maya is just the one I’m most familiar with. You may want to try out Cinema 4D or Blender instead.
I have more questions!
I bet! If you want to learn more about Maya, Unity, and designing for VR, I’ve been keeping notes on all these topics and making them public. There are more resources and actual guidance in there; I hope it helps.
I also recommend simply diving in. It can seem scary and overwhelming, so follow a tutorial. Follow twenty tutorials. You can and will get the hang of it.
There are meetups in major cities, accounts to follow on Twitter, and lots of resources cropping up online. No one has all the answers yet, but that’s what’s exciting about VR. We’re all learning together.
|
OPCFW_CODE
|
M: ICANN Delays .ORG Sale Approval - watchdogtimer
https://www.icann.org/news/blog/org-update
R: jmccorm
I can't tell if ICANN is trying to publicly clear their own name of any
wrongdoing, or if all the added attention has woken them up to how seriously
flawed this whole ordeal was (which either by policy or procedure they managed
to miss) or both. None of those answers are satisfying. Here's hoping they do
the right thing and kill the sale... but even then, I'd be concerned they'd
restructure the deal and try again.
I thought ICANN was supposed to be the good guys? You know, the responsible
managers of the Internet and providing solid governance which serves as the
best argument against any kind of government intrusion? I'm hoping they didn't
just grow bored with it and decide to get rid of it. As though it was some
sort of dearly loved Google service with mild profitability and little-to-no
opportunity for internal career development. ;)
R: gruez
>I thought ICANN was supposed to be the good guys?
You could say that about every government, yet there's widespread corruption
in most (or all) countries[1]. There's even corruption in democratic
governments that are directly accountable to their citizens every few years
(via elections). Given the way the ICANN board is appointed is... indirect at
best, thinking that ICANN are the "good guys" is wishful thinking at best.
[1]
[https://en.wikipedia.org/wiki/Corruption_Perceptions_Index](https://en.wikipedia.org/wiki/Corruption_Perceptions_Index)
R: arcticbull
To be clear are you suggesting the private sector would do a better job
managing this? After all, we're _criticizing_ their transfer of the TLD to the
private sector. They can't be both the hero and the villain in the same story
can they? Surely I'm missing something?
At least the goal of the public sector is to serve the public whereas the goal
of the private sector is to serve the interests of there shareholders.
If this was a private company on both sides wouldn't we just be hailing it as
a big win for shareholders and/or demanding the government step in and
intervene?
R: michaelt
There are some people who think when it comes to DNS ICANN should basically be
frozen - no new gTLDs, no wildcard DNS, no supporting domain name seizure, no
price or policy changes, and so on. Just delegating to each TLD's providers,
and maybe some policing of providers' behaviour.
If you believe that, _hypothetically_ the best choice would be some sort of
private nonprofit, independent of the government but bound to inaction by
charter.
Whether being directly controlled by government would be an improvement is
somewhat debatable - would lawmakers be reasonably hands-off, like they are
with things like GPS and NIST time services? Or would the opportunity to block
pornhub/piratebay/wikileaks prove irresistible?
R: thaumasiotes
What's the problem supposed to be with new TLDs?
R: michaelt
You can read more at [1] - in short, critics of the new gTLDs would argue
that:
1\. It doesn't deliver the claimed increase the supply of domain names, as no-
one would build a business or brand on whatever.info without securing
whatever.com
2\. It _does_ shake down domain registrants for cash - if you already own
whatever.com you'd better get whatever.info and whatever.sucks before someone
else squats them.
3\. These factors mean .info domains and suchlike are a stereotype of sketchy
sites, which is a negative feedback loop.
[1]
[https://en.wikipedia.org/w/index.php?title=ICANN&oldid=93014...](https://en.wikipedia.org/w/index.php?title=ICANN&oldid=930148901#TLD_expansion)
R: jzl
If ISOC hadn't removed the .org price restrictions, I feel like this could
have been defensible. Farm out the management but hold the new stewards under
a strict leash, fine. (Of course it wouldn't have gone for $1B+ in that case.)
But coupling it with the unrestricted price increases is just indefensibly
corrupt and hopefully illegal given that ISOC was never intended to profit
from .org in the first place.
R: wmf
The people behind this deal are ICANN insiders who know the rules intimately;
in some cases they may have written the rules. It's hard to imagine that ICANN
won't approve the deal.
R: ardy42
> The people behind this deal are ICANN insiders who know the rules
> intimately; in some cases they may have written the rules. It's hard to
> imagine that ICANN won't approve the deal.
If that's the case, maybe the US government needs to reclaim authority over
ICANN, which it had until a few years ago, and assert that authority to
reverse this decision. There needs to be some kind of effective oversight.
R: AsyncAwait
The majority of people don't live in the U.S. Why should the U.S. government
hold such control? If the argument is that they invented the internet, (hardly
entirely accurate), they certainly didn't invent the World Wide Web.
I don't even agree with them having .gov, them regaining .org would be a
disaster.
If anything, some sort of multi-national, strictly non-profit charity should
be setup for this, or if not, the U.N. as a last resort.
R: CharlesColeman
> Why should the U.S. government hold such control?
Because they did until _very_ recently.
> If anything, some sort of multi-national, strictly non-profit charity should
> be setup for this, or if not, the U.N. as a last resort.
That would take _literally years_ , and would be too-little, too-late to deal
with these shenanigans around the .org TLD.
R: AsyncAwait
> Because they did until very recently.
That's not an argument. Any regressive policy could and has been justified in
those terms.
R: CharlesColeman
> That's not an argument. Any regressive policy could and has been justified
> in those terms.
Yes, it is. The US government has an institutional history of overseeing
ICAAN, so it's the most practical organization to take on that role on short
notice.
I think you're letting the perfect be the enemy of the good. Your preferred
multi-national solution would likely take years to negotiate, which means it's
not really even a solution at all, as far as .org is concerned.
R: reitzensteinm
The only sane outcome is clawing back .org from PIR, regardless of what they
intend to do with it now. They've shown themselves to be bad stewards that
view it as purely monetizable asset.
I doubt the will would be there even if it were legal. But one can dream.
R: jdkee
This continuing privatization of public goods is unconscionable and the
leadership of ICANN should be held to account.
R: ohashi
They've been held to... bank account. This is being done by ICANN insiders and
there is no accountability mechanisms at ICANN. The organization is captured.
The .ORG contract renewal wasn't even looked at by the board, it was handled
by staff. It was pointed out letting staff do it avoids oversight mechanisms
(coincidence? probably not). The board oversight group also all had to recuse
themselves to the point there wasn't enough people to perform any actual
oversight. ICANN is a captured and corrupt organization. Very few people who
aren't being compensated to be there spend any time it seems, thus registry
interests win.
R: 3xblah
"Public announcements by PIR, ISOC and Ethos Capital contain relevant facts
that were not required in the request for approval."
What facts?
Why were those facts not required?
Sounds like under the system in place there is little due diligence expected
to be done by ICANN before a decision is made -- none of these public facts
were "required", let alone anything non-public.
At least we know someone is reading the public announcements.
R: echelon
The `.or` tld isn't taken.
1\. Create a non-profit to buy `.or` as a new gTLD. Legally carter it so that
the stakeholders are distributed and it can never be taken private.
2\. Pre-reserve `.or` domains for all existing `.org` domains.
Optional:
3\. Give free lifetime registration to any org stakeholder that redirects
their domain to the `.or` version and/or displays a banner about PIR's
malfeasance.
4\. Any new `.org` registrants are barred from registering a `.or`
5\. Buy back `.org` when it becomes worthless and give it back to the
stakeholders.
R: ReverseCold
I know it's a joke, but two letter domains are reserved for countries :)
R: tialaramex
In practice it would be possible, if it was extraordinarily thought to be
worthwhile, for ICANN to agree with the relevant UN group that this code
should never be issued. The UK code is not available but is "extraordinarily
reserved" and so it doesn't cause a conflict. EU is also reserved even though
the European Union certainly isn't a country.
R: NickNameNick
.uk is functional. Did you mean .gb ?
R: Symbiote
UK is not the ISO code for the United Kingdom of Great Britain and Northern
Ireland, the code is GB.
.gb and .uk are both functional, and .uk as a top-level domain is exceptional.
EU is also not an ISO code, but has been reserved, and is also a TLD as .eu.
R: M2Ys4U
>EU is also not an ISO code, but has been reserved, and is also a TLD as .eu.
`eu` is also exceptionally reserved in the ISO database, which is enough for
ICANN.
The ISO states in their FAQ:
"You can use EU for the name European Union. Please note that this is not an
official ISO 3166-1 country code. The European Union is not a country but
rather an organization. As such it is not eligible to be formally included in
ISO 3166-1. Recognizing, however, that many users of ISO 3166-1 have a
practical need to encode that name the ISO 3166/MA reserved the two-letter
combination EU for the purpose of identifying the European Union within the
framework of ISO 3166-1."
R: joshuaellinger
Their site supports public comments and, for all the noise, I see all of three
comments up there right now. Go comment.
R: glitcher
Those are very thoughtful and informative comments as well.
R: jzl
_" On 14 November 2019, PIR formally notified ICANN of the proposed
transaction. Under the .ORG Registry Agreement, PIR must obtain ICANN's prior
approval before any transaction that would result in a change of control of
the registry operator. Typically, similar requests to ICANN are confidential;
we asked PIR for permission to publish the notification and they declined our
request."_
Unbelievable. The brazenness of this heist continues out in the open.
R: will_pseudonym
For someone who isn't super familiar with the details, on a scale of 0 to
IOC/FIFA, how corrupt is ICANN?
R: ohashi
FIFA level
R: vt240
Has anyone looked at PIR's 990 Forms and has knowledge of business operations
in the sector. It looks like from their expense sheet PIR contracts all of the
actual management of the their TLDs to Afilias. What is the breakdown of
responsibility here? What does PIR actually do?
R: toast0
PIR chooses and supervises the contractor that runs the TLD. I don't know if
maybe, once upon a time, PIR ran the TLD itself, and then decided to contract
it out.
It's not an unreasonable thing to contract out. Afilias runs multiple TLDs,
and there are many operational things that would have a cost benefit from
sharing; for example, running the anycast network of authoritative servers in
as many points of presence as possible --- each PoP requires capital
investment and time investment to setup and run, but servicing additional load
may be possible at a much smaller marginal cost.
R: jumelles
I really hope this isn't too little too late. The sale should be stopped.
R: fierarul
ICANN is just trying to calm down the riot. This won't end well until there's
some formal investigation from the US government.
R: aib
This might be the 3rd-worlder in me talking, but I kinda want the sale to
happen so a better DNS system can rise from its ashes.
R: mobilemidget
Anybody know of large .org sites that reconsidered or perhaps already have
changed their URL(s)?
I do hear from contacts .org are more often not renewed when prompted for
yearly fee, I did not either for my shelved .org domains.
R: floor_
Its cool that my $13 domain is now worth >$750.
R: BucketSort
We should have a peer-to-peer DNS driven by some consensus system like
blockchain. Why do we allow critical pieces of internet infrastructure to be
centralized?
R: jrockway
There is no mandate to trust the root DNS servers. Someone like Google
(8.8.8.8) or Cloudflare (1.1.1.1) could just start registering their own .org
domain names and nobody could stop them.
The resulting shitstorm would be so enjoyable to watch.
R: Thorrez
Previously the CEO of Cloudflare has been against making 1.1.1.1 return
anything non-standard, because even a single instance of that would ruin the
integrity of DNS.
[https://news.ycombinator.com/item?id=19829033](https://news.ycombinator.com/item?id=19829033)
R: jrockway
Yeah, that's a very reasonable stance. I'm not saying it's a good idea to
hijack .org, just that it's more possible than one might think.
|
HACKER_NEWS
|
different accessory
Good morning everybody,
Thank for the great work. I write from italy.
I have a question.
If i want to open two different garage door whit a different Siri name, what can i do ?
I try to created two different file s whit garage door open accessory whit the different Siri name . For exsample: one have garage name and one have basculant name.
When i speck whit a different name the door opened in same Times.
It ' s possibile to created two different accessory??
Help me thanks very mutch.
I'm looking for the same solution. I have 3 Garages which i want to open with Siri and 3 different names. I tried to copy the default garage accessory and renamed it. In the file i changed the display name and the username. But when i start core.js the created accessory won't start.
any ideas?
Thanks for support.
Can you share your garage door opener accessory file? You might need to change the UDID
Here the files i created for Garage2 and Garage3.
Garage2and3.zip
Thanks for your help.
I created new files for you. I see you use my tutorials 👍
GarageDoorOpener2
GarageDoorOpener3
Thank you very very mutch 👌👌👍🏻👍🏻
Inviato da iPhone
Il giorno 15 giu 2016, alle ore 03:40, Amruth Pabba<EMAIL_ADDRESS>ha scritto:
I created new files for you. I see you use my tutorials 👍
GarageDoorOpener2
GarageDoorOpener3
—
You are receiving this because you modified the open/close state.
Reply to this email directly, view it on GitHub, or mute the thread.
Thank you too! great work.
Now it will parsing the additional doors.
now i have another problem. on iphone i use the app "eve". when i add the first garage door and call it garage door one then siri will open it without any failure and says "garage one is open".
when i add the second door and call it garage door two then siri only says "the garage doors are open" and open both doors.
what app do you use? have you any idea what's the problem?
when i manually open the doors with the button it will work correctly. it must be a app-error?
Thanks for support. very nice tutorial at all.
thank you for great work.i have same problem.
Date: Wed, 15 Jun 2016 11:53:02 -0700
From<EMAIL_ADDRESS>To<EMAIL_ADDRESS>CC<EMAIL_ADDRESS><EMAIL_ADDRESS>Subject: Re: [KhaosT/HAP-NodeJS] different accessory (#248)
Thank you too! great work.
Now it will parsing the additional doors.
now i have another problem. on iphone i use the app "eve". when i add the first garage door and call it garage door one then siri will open it without any failure and says "garage one is open".
when i add the second door and call it garage door two then siri only says "the garage doors are open" and open both doors.
what app do you use? have you any idea what's the problem?
when i manually open the doors with the button it will work correctly. it must be a app-error?
Thanks for support. very nice tutorial at all.
—
You are receiving this because you modified the open/close state.
Reply to this email directly, view it on GitHub, or mute the thread.
I was testing it on "Insteon+" and "myHome+" with the same error. Then I renamed the doors to "Garage One" and "Garage Door Two" for testing but nothing changed. Siri shows the correct names on the screen.
To ease this in my situation, I created two scenes for each garage I have:
"Open Garage Door X" and "Close Garage Door X"
I just configured the scene to open or close the corresponding garage. Im not sure about the apps you are using, but the app I use allowed me to configure these scenes. When I tell Siri Open Garage Door X, it would execute the corresponding scene. I've been doing this for months now.
I found the problem.
When i say "garage X open" (in german) siri will understand the command but open both garages.
I have to say "Open Garage X" then it works correctly. the problem was behind the screen. :)
thanks again!
Legotheboss,
Great job on your tutorials! First garage with sensor works great! When I tried your tutorial to create second garage door file, my homekit app does not see the second garage door opener as an option when I tap 'add accessory'
Any suggestions?
I even copied the second garage file from the above coments, still did not work.
Legotheboss,
Great job on your tutorials! First garage with sensor works great! When I tried your tutorial to create second garage door file, my homekit app does not see the second garage door opener as an option when I tap 'add accessory'
Any suggestions?
I even copied the second garage file from the above coments, still did not work.
file:///home/pi/Temp%20copies/GarageDoorOpener_accessory
|
GITHUB_ARCHIVE
|
Force cgo for golang DNS resolver
The default behavior of the golang net package is to use the pure go
implementation which sends DNS requests directly to the servers listed
in resolv.conf. This breaks when running the docker client inside a
docker container that's 'linked' with other containers given that those
links utilize teh /etc/hosts to function. This change forces golang's
netdns to use cgo which uses the system c libraries and respects
nsswitch.
Before this change any short name conflict will result in the DNS query
resolving to a remote host instead of the linked container. For
example, docker will resolve to docker.rc.fas.harvard.edu instead of
the linked container due to the search domain in /etc/resolv.conf.
I should also note this is documented at golang.org
Aren't Docker's official release binaries explicitly compiled with -tags netgo, and thus overriding this choice directly at compilation time?
IMO it might be better to instead document adding -e GODEBUG=netdns=cgo and what the difference is (if we can find a simple way to verify the behavior), or add ENV GODEBUG="netdns=cgo" to the Dockerfile if it's determined that using this behavior by default does make sense (in which case, shouldn't upstream consider adding code to handle this automatically?)
I didn't see where the Docker official release binaries are compiled with -tags netgo but I must have missed it. The way I read the documentation this choice can always be made to runtime using the GODEBUG=netdns=cgo and if it is explicitly compiled that way our tests show that the GODEBUG variable does override this.
My argument against documentation is, using netgo with a search domain can breaks the container linking which is critical to the functionality of the Docker in Docker setup. Additionally it breaks it in a very subtle way in order to save one system thread.
I also wrested with sending this upstream but found some arguments about similar go applications which asserted that this was a problem with the application or language and could be solved other ways. I settle on putting it here because it minimized the impact of the change but I'm totally open to sending this upstream.
IMHO golang should provide cgo as the default netdns implementation and provide a mechanism (e.g. env variable) for choosing the pure GO resolver as an optimization which it clearly is:
By default the pure Go resolver is used, because a blocked DNS request
consumes only a goroutine, while a blocked C call consumes an operating
system thread.
from: https://golang.org/src/net/net.go
Also the docs claim it will switch to cgo under a variety of circumstances including if nsswitch.conf declares something it doesn't support but this clearly has issues:
https://github.com/golang/go/issues/11450
I think this is a worthwhile change unless there's a really compelling reason not to rely on the system C libraries for dns lookups.
It's a good question whether docker upstream should also force cgo...
It looks like -tags netgo was officially added in https://github.com/moby/moby/pull/2162 and fixed for newer go versions in https://github.com/moby/moby/pull/10087. I am doubting they would change it since they use it to be able to create a fully static binary and we try to keep as close to upstream releases as possible. I would suggest just using -e GODEBUG=netdns=cgo when using a search domain.
See https://github.com/docker-library/docker/pull/84, which adds an appropriate /etc/nsswitch.conf telling Go's netgo resolver to check /etc/hosts before doing a DNS lookup, matching Debian's default configuration. :+1:
Awesome!
|
GITHUB_ARCHIVE
|
Google’s Accelerated Mobile Pages (AMP)
There has been quite a buzz about Accelerated Mobile Pages since Google presented it in October 2015. Recently the topic gained momentum as some sources report Google Search would rank AMP pages higher than “regular” pages.
Time to take a quick look at AMP, see what’s behind it and whom it is made for - and for whom not.
What is AMP?
AMP is an open-source framework developed by Google that is relatively easy to integrate in existing web pages to speed up page load near the “magic” 1 second mark.
It’s main promise it that “all accelerated mobile pages should be always decently fast”.
AMP is focused at the content-consumption use case and tries to balance the user experience against typical business requirements such as
- monetization via ads
- user tracking via beacons and analytics
How does AMP work?
AMP applies and enforces several best practices that are around for a while - but are not widely adopted by web developers. For instance,
- load only what is likely to be seen, for instance because it’s in the initial viewport (above the scroll fold)
- download ads with low priority and when the user agent is idle
Rendering Performance Optimization
- resources such as images, ads or iframes need to state their size in the HTML. This allows to render the page layout without downloading these resources
- limited size for style sheets (max. 50 KB) and FASTDOM like optimizations
Content Delivery Network
AMP offers a Content Delivery Network (CDN) so that the HTML document and all resources are loaded from the same origin using HTTP/2.0.
A comprehensive list of optimization can be found on the project’s webpage: https://amphtml.wordpress.com/2015/12/16/why-amp-is-fast/
For whom is AMP?
As mentioned above AMP is designed for content pages. It is not intended for web apps that focus on functionality such SaaS tools, etc.
The adoption rate for AMP in popular content management systems like Wordpress, Typo3, etc. is still poor: an AMP Plugin for Wordpress exists in V0.3 but it has significant limitations and a low / moderate rating. That said, AMP currently requires a web developers.
It is still a quite new framework and definitely work-in-progress. That means it may comes with some flaws and drawbacks that are not obvious when you start. I would recommend using it for selected content pages and roll out after you have gained enough experience and confidence.
AMP vs Facebook Instant Articles
On 12. April 2016 Facebook is going to launch Instant Articles - a tool that addresses the same issue: pages load up to 10 times faster than on the standard mobile web. So what is the difference to AMP?
- Instant Articles work only from within you Facebook app on iOS and Android. Users of other platforms and apps, such as your mobile or desktop browsers, will be linked the web version of the content and thus not benefit from Instant Articles whereas AMP improves performance no matter what tool or platform you are using.
- Instant Articles uses a fixed set of HTML5 elements to structure your content. You can define style templates using a web based Style Editor that allows you to set custom fonts, colors and a few more style elements. However, Instant Articles limit you much more on the design and interactivity options that AMP does.
For further information on Instant Articles please check
Instant Articles FAQdevelopers.facebook.com
I am preparing a short tutorial on how to integrate AMP in your web pages right now. Will post an update here as soon as it’s ready.
For many, reading on the mobile web is a slow, clunky and frustrating experience - but it doesn't have to be that way…www.ampproject.org
|
OPCFW_CODE
|
I have an old program in windows that will export data to a CSV format.
In my new program that I have written on the MAC I want to import the data from
the old database (CSV file) to the new Sqlite database. This will be a one time shot.
The fields do not match between the 2 programs.
Can anyone provide a sample or information on how to do this ?
Jerry, if you just need to convert the old CSV to sqlite, you can simply convert the file with a DB tool for Sqlite.
If you would like to code something, instead, just take a look at the split() function.
I believe you will have more luck if you put some effort into solving the problem yourself and then asking specific questions with areas of difficulty along the way.
easiest way would be to import the data into a temp table. then use an insert statement to map in the temp table into the new table. You can delete the temp table when operation finishes.
@ Jeremy Cowgar If thats the type of help you have to offer why bother. I did try things and you assume I did
not put any effort into it. Who the hell are you to say otherwise anyway. Sorry I’m not up to your standards. I thought
this forum was to get help and to help others when and where you can.
And to you other guys, Thanks for the comments.
Hi Jerry, did you find a solution for what you’re trying to do?
For importing data from other databases I usually do as follows:
I use Navicat Premium to connect to the old database and pull data into a new SQLite database. Navicat does a nice job to transcode data to UTF-8 encoding, which is what I want to have in a Xojo app.
If the data structure changes from the old to the new database, then I write a little Xojo app which selects and adjusts data from the intermediate SQLite database (created with Navicat) to the final database.
Let us know if you need some more specific support.
The old program does not use a database. It saves data to a structured encrypted text file.
Luckily the program will export to a CSV file. There are only 9 data fields in the old program and in my
new program I have many more fields for the same table. I have SQlite Expert Pro and Excel.
I brought all the data into Excel and created the fields in excel to match the fields and then saved
it to a new CVS from Excel and then tried to import it into the SQliteDatabase using SQlite Expert Pro.
The problem is it only will bring in 15 records of 140 and misses 2 of the 15 that do come in.
I’ve been racking my brain for the last to days trying to figure out why and haven’t figured it out yet.
Thats why I asked the question here to see what others have done for importing data from 1 format to another.
I usually save to TAB-Delimited format to make it easier to read and parse the fields.
ReadLine each record
Split on Tab to get an array
Manipulate the data as needed
Save to the new database
As Tim wrote: save the CSV file as a tab delimited file.
One way to read such file would be as in this example, reading a textfile with 3 columns, tabulator delimited:
[code]Private Sub imTextRead()
Dim f As FolderItem
Dim textInput As TextInputStream
Dim AccountNo, AccountName, IsRev As String
Dim rowFromFile As String
Dim txName As String = “accounts_EN.txt”
f = App.ExecutableFile.Parent.Parent.Child(“Resources”).Child(txName)
f = GetFolderItem(“Resources”).Child(txName)
If f <> Nil And f.Exists Then
Dim tab As String = ChrB(9)
textInput = TextInputStream.Open(f)
textInput.Encoding = Encodings.UTF8 //strings are UTF8
While Not textInput.EOF
rowFromFile = textInput.ReadLine
If (CountFields(rowFromFile, tab) <> 3) Then
MsgBox "ERROR ..."
AccountNo = NthField(rowFromFile, tab, 1)
AccountName = NthField(rowFromFile, tab, 2)
IsRev = NthField(rowFromFile, tab, 3)
If Not imAccountAddImportRow(AccountNo, AccountName,IsIncome) Then
MsgBox "ABORT: Unknown error"
You then still would need a method to write each record to your database file (In my example this is done by imAccountAddImportRow)
Because it is good advice. I saw the question was not answered after a period of time and was offering advice on how to get better response from the forums. Generally specific questions get answered quickly vs. broad questions. I was attempting to be helpful, I’m sorry that it came across incorrectly. I did read my response and thought it sounded harsh but was unable to edit the post to lighten it up. About 1/2 of the time I am never able to edit my posts, I should re-read them before clicking the “Post a Reply”. My bad.
Thanks for the help guys. I did end up using what I was using to start with (Excel and Sqlite Expert Pro).
My problem ended up being the import function in Sqlite Expert Pro. It would only read in 1 line at a time.
Still not sure what I did in Excel but it finally worked.
Simon, I did try playing with your routines but just got errors. Something about not existing.
Oliver. Your example could have worked for me but I did not try it as excel worked.
Again thanks for the help guys.
|
OPCFW_CODE
|
Output variables for offline drops
Offline drops don't currently support passing output variables from a deployment package step to another step as part of the same deployment script. Would it be nice to add them?
Output variables are now available as of 2018.3.0.
Warren Rumak commented
I ended up finding a way to implement this feature myself.
The basic idea is, instead of directly using the .ps1 script generated by Offline Package Deploy:
1) Rewrite the .ps1 script to capture the output of each call to calamari.exe
2) Find all the lines starting with ##octopus[setVariable
3) Parse the name and value (they're base 64 encoded) and turn it into "Octopus.Action[$StepName].Output.$VarName": "$varValue"
4) Crack open every .json file in the Variables directory and add those keys/values to the end
5) Steps that follow afterwards will now be able to see those variables.
Now, this is all fine so long as sensitive variables & certificate variables aren't being used. When the variable files are encrypted, they couldn't be rewritten without significant help from Calamari.
Fortunately, Calamari supports loading both sensitive & non-sensitive variable files in a single execution. Unfortunately, Octopus doesn't split the sensitive/non-sensitive variables into separate files when creating Offline Deploy Packages.
It therefore becomes necessary to write the newly-created variables to a second set of .json files that contain the additional properties, then rewrite the calls to Calamari.exe to include both the sensitive and non-sensitive variable files.
Within the next few months this capability is going to become a necessity for us aswell. As this seems to be a very popular feature is there any ETA as yet?
Daniel S commented
Any news on if this can be expected in a future release? Suggestions on how to work around the problem would be nice as well.
Javier Casal commented
My company production environments require offline drops (enterprise security, you know) and this one is preventing us to use Octopus Deploy for all our infrastructure and forcing us to have mixed deployment processes. This is giving us some headaches.
Anders Strom commented
I agree with the other commenters, this is both a showstopper and a major hassle to get around. The Octopus System variables really needs to be made available to offline packages.
I second what everyone is saying. For us, it has become a showstopper. Spent a lot of hours setting up Octopus Deploy for our staging environments but now that i need an offline package, it is failing because of Octopus System variables.
Harald Sømnes Hanssen commented
I was surprised by this problem dating back since 2016, I thought it didn't matter whether a release was either on a live client or an offline package.
For us it is important that there aren't any difference between an offline package and a live package, since all of our scripts are using output variables
Freddy Mello commented
Having used Octopus Deploy happily for almost 3 year for online drops, this limitation came as a big surprise to us.
Now we need offline drop capabilities, but currently it does not fulfill its role and it makes Octopus Deploy rather useless to us going forward.
Fixing this would have a huge positive impact on our deployment story.
Tomasz Rz commented
I totally agree with the others - either you can use offline drops exactly as the online deployments or the offline drops feature is useless. In the end you want it precisely when you cannot do online deployment, which very often means PROD - due to networks/firewalls/proxies/etc. settings developers rarely control.
Big surprise for us, "production" is an isolated environment and therefore Offline package drops becomes pretty useless.
Justin C commented
In environments where we are simply not permitted to use tentacles on every deployment environment (e.g. trying to gradually build confidence in the product for wider use) it is very important that there are no surprises when switching to an offline installer.
Warren Rumak commented
People have the expectation that Offline Package Drop deployment steps will work the same was as if it were being executed from inside a tentacle.
If the behaviour is different, then this feature is useless.
Mark Gould commented
I agree with the others - there are many cases where you need to know the path of a previous step
Johan Allansson commented
I spent some time digging through the Calamari code, and I think that support could be implemented quite easily.
My idea is simply to pass a filename to Calamari as an argument (e.g. outputVariables), load all variables from the file, register any output variables emitted during execution and then write all variables to the file again. The filename could be passed between multiple calls in the powershell script generated for the Offline Package. If the argument is omitted, we simply do nothing and act as before.
I could implement this and do a pull request for Calamari. I cannot fix the generated scripts however, since they are done by the server and not currently available via Github.
Peter Hageus commented
Showstopper for us
Lee Hull commented
This is pretty much required for me to use offline drops since i need to know where it was installed so my powershell scripts can run
|
OPCFW_CODE
|