// This file was automatically generated from
//
//	user.go
//
// by
//
//	generator -c Profile
//
// DO NOT EDIT

package model

import (
	"errors"

	"golang.org/x/net/context"
	"google.golang.org/appengine/datastore"
)

// UserKind is the kind used in Datastore to store User entities.
const UserKind = "User"

// Users is just a slice of User.
type Users []User

// KeyedUser is a struct that embeds User and also contains a Key, mainly used for encoding to JSON.
type KeyedUser struct {
	*User
	Key *datastore.Key
}

// Key is a shorthand to fill a KeyedUser with an entity and its key.
func (ƨ *User) Key(key *datastore.Key) *KeyedUser {
	return &KeyedUser{
		User: ƨ,
		Key:  key,
	}
}

// Key is a shorthand to fill a slice of KeyedUser with some entities alongside their keys.
func (ƨ Users) Key(keys []*datastore.Key) (keyed []KeyedUser) {
	if len(keys) != len(ƨ) {
		panic("Key() called on a slice with len(keys) != len(slice)")
	}
	keyed = make([]KeyedUser, len(ƨ))
	for i := range keyed {
		keyed[i] = KeyedUser{
			User: &ƨ[i],
			Key:  keys[i],
		}
	}
	return
}

// Put will put this User into Datastore using the given key.
func (ƨ User) Put(ctx context.Context, key *datastore.Key) (*datastore.Key, error) {
	if key != nil {
		return datastore.Put(ctx, key, &ƨ)
	}
	return datastore.Put(ctx, datastore.NewIncompleteKey(ctx, "User", nil), &ƨ)
}

// PutWithParent can be used to save this User as a child of another entity.
// This will error if parent == nil.
func (ƨ User) PutWithParent(ctx context.Context, parent *datastore.Key) (*datastore.Key, error) {
	if parent == nil {
		return nil, errors.New("parent key is nil, expected a valid key")
	}
	return datastore.Put(ctx, datastore.NewIncompleteKey(ctx, "User", parent), &ƨ)
}

// NewQueryForUser prepares a datastore.Query that can be
// used to query entities of type User.
func NewQueryForUser() *datastore.Query {
	return datastore.NewQuery("User")
}
|
STACK_EDU
|
Last week I finally got my first Azure Stack TP2 deployment completed after weeks of errors, as I blogged before. After that I needed to redeploy several times and ran into different issues every time.
This time it stopped at step 0.20. After retrying the deployment with:
Invoke-EceAction -RolePath Cloud -ActionType Deployment -Start 0.20 -Verbose
it stopped at the same error. See below for the error message.
2016-11-08 20:45:06 Verbose VMs to create: MAS-BGPNAT01
2016-11-08 20:45:06 Verbose Updating management nodes for HyperConverged deployment.
2016-11-08 20:45:12 Verbose Skipping deployment of the VM named 'MAS-BGPNAT01'. It is accessible via remote Powershell.
2016-11-08 20:45:12 Verbose Waiting for the following VMs to be remotely accessible: MAS-BGPNAT01.
2016-11-08 20:45:13 Verbose The VM 'MAS-BGPNAT01' has successfully started.
2016-11-08 20:45:15 Error Task: Invocation of interface 'Deployment' of role 'Cloud\Fabric\VirtualMachines' failed:
Function 'Add-GuestVMs' in module 'Roles\VirtualMachine\VirtualMachine.psd1' raised an exception:
The WS-Management service cannot process the request because the XML is invalid.
at Wait-VMPSConnection, C:\CloudDeployment\Roles\VirtualMachine\VirtualMachine.psm1: line 1683
at Add-GuestVMs, C:\CloudDeployment\Roles\VirtualMachine\VirtualMachine.psm1: line 265
at <ScriptBlock>, <No file>: line 18
2016-11-08 20:45:15 Verbose Step: Status of step '(NET) Deploy BGP VM' is 'Error'.
2016-11-08 20:45:15 Error Action: Invocation of step 0.20 failed. Stopping invocation of action plan.
2016-11-08 20:45:15 Verbose Action: Status of 'Deployment-Phase0-DeployBareMetalAndBGPAndNAT' is 'Error'.
The MAS-BGPNAT01 VM was accessible and I didn't notice anything related to the error above in the event logs. After a reboot of the MAS-BGPNAT01 VM I started the deployment from step 1, without the -Start parameter:
|
OPCFW_CODE
|
Development framework including code generator and UML ex/import
A component based programming framework. This project is aimed to support various target frameworks. A wxWidgets based GUI application is the major sample which also provides rapid database GUI design with UML import and export (db reverse engineering).
A cross-platform terrain rendering engine that uses advanced techniques such as dynamic tessellation to render complex landscapes at high frame rates. Suitable for use in games, engineering, simulation, etc.
Essential Budget is a graphical personal finance manager designed for efficiently tracking home finances. It is currently the only open-source personal finance manager to implement mature budgeting support. Cross-platform using Java and Eclipse's SWT.
FreeCLX is the Open Source project for Borland's CLX Component Library for Linux. A Borland No-Nonsense licensed version of CLX is available in the Desktop and Server Editions of Kylix. A Borland Kylix compiler is required to build FreeCLX.
The "gopchop" utility is a video editing tool designed to take MPEG2-PS files and perform Group-of-Pictures editing ("GOP-accurate editing"). No re-encoding is performed, which drastically improves editing speed.
Decouple your GUI building code from the rest of your application. Using an XML description, the Java Gui Builder will build appropriate windows, controls and objects for later retrieval by the mainstream code.
Lights Out was originally (at least to my knowledge) a handheld electronic puzzle game by Tiger Electronics. I decided to implement this game in Java/Swing and add lots of cool features that could not possibly be done in the handheld game including: *
Linux System Diagnostics (LSD) is a project to develop an OpenSource diagnostics package for Linux (i386), similar to PC-Doctor and WinMSD. It will encompass utilities to view system resources, as well as test system hardware.
The goal of this project is to develop an open-source implementation of the management application (and drivers if necessary) for Creative NOMAD MP3 player (the first, parallel port model).
Interpreter Pascal in Delphi
We're at the start of an interstellar war. Not only will you have to try to defeat the enemy, you'll also have to do a good job in managing your planets, keeping your fleets up-to-date and gathering intelligence on the enemy race. You'll have to expan
Xylo is a simulation software that emulates a vibraphone, a musical instrument that belongs to the xylophone's percussion family. It uses a WiiMote to track the 2D positioning of two IR-emitting sticks.
A java based network and firewall simulator that allows users to set and test the effectiveness of firewall rules using either a command line interface or graphical user interface.
AGISim is a framework for the creation of virtual worlds for artificial intelligence research, allowing AI and human controlled agents to interact in realtime within sensory-rich contexts. AGISim is built on the Crystal Space 3D game engine.
Final Frontier Trader is a 2D single player space strategy, combat, and trading game. You pilot a starship which is upgradable. You can buy, sell, or trade parts and even new starships. You can even join a fleet for experience in missions and combat.
GPLIGC is a program for analysis and 3D visualization of GPS tracklogs (in igc format) as recorded by flight data recorders used by glider pilots.
JSDL is a java binding to the SDL library
Pears is a three-pane newsfeed (RSS/RDF/Atom) aggregator which caches downloaded feeds for offline use. It has a clean, uncluttered interface, it's easy to use and works on Windows, Linux and MacOSX. You can extend its functionality with plugins.
PyEclipse is a Python plugin for the Eclipse platform.
Simured is a multicomputer network simulation with a visual interface to see packet movement on the network. It is multi-platform and there are versions in Java and C++.
digSetStat helps track volleyball stats (serving, hitting, blocking, defending, serve receive) during a match. It facilitates the job of the stat-taker by providing them with photos of the relevant players in the correct layout.
eFTE is a lightweight, extendable, folding text editor geared toward the programmer. eFTE is a fork of FTE with goals of taking FTE to the next step, hence, Enhanced FTE. The upstream for the source code is on GitHub as of September 2015.
A.T.Edit is a Tcl/Tk based Text editor. It works on X Window System and Microsoft Windows. Also it works as front end of IBM English-Japanese translation software (for Japanese).
A C++/Qt program for use in encrypting and decrypting simple substitution cyphers. These cyphers are often found in newspapers and various puzzle books.
|
OPCFW_CODE
|
How can I find out what happened to my Area 51 proposal?
I made an Area 51 site proposal a few days ago. Today I logged into Area 51. I don't see any trace of the proposal in my profile, through search or anywhere. I'm guessing the proposal might have been deleted. In any case, how can I find out what happened to the proposal?
In terms of "how", can you verify if the proposal appears under your own profile? http://area51.stackexchange.com/users/156437/fiksdal
@RobertCartaino It doesn't. That's the first thing I checked.
I will attempt to get that addressed in the next round of Area 51 updates.
@RobertCartaino You want to implement that one should be able to view one's own proposals, even after they're deleted? That sounds like a good idea.
Usually when you can't find your proposal it means it got deleted, as the other answer already says.
To be sure, you can try Google; it will usually have a cached copy of the proposal.
To find out why it got deleted, you can ask in the Area 51 Discussion Zone, which is the equivalent of its meta site.
An example of such a question: What happened to the 'content creators' proposal? That is most likely what happened in your case as well: the proposal didn't get five example questions, or 5 followers, within three days.
If you're talking about the Scandinavian Languages proposal—yes, it has been deleted. You can ask one of the 10k+ users for more details—they are able to view deleted proposals.
You mean 6 users, out of which only one is active.
@ShadowWizard What do you mean? OP didn't mention any number?
@Fiksdal I mean take a look in the list of users you linked to. Count how many have 10k+ reputation. Check their profiles. Only one was active in the last couple of days, the rest weren't around for weeks, months, and years. (so "asking one of them" is not really relevant.)
@ShadowWizard But they are active on other SE sites.
@ShadowWizard But OP didn't say anything about how many there were?
@Fiksdal my point is, there are very few users with 10k rep. Chances one of them will be willing to take a look are not so high, especially when they're not active on Area 51, meaning they probably won't really care what's going on there. Plus, asking them about every deleted proposal (there are many) will most likely get them... not so happy.
@ShadowWizard Oh, I see. Yeah. It's quite impractical, really. I wish proposals could just be closed rather than deleted, and that users could see the closure reason. I saw a feature request about that and upvoted it.
Fiksdal, would you mind posting a link to that feature request? Thanks!
@Sue Close, rather than delete inactive Area 51 proposals
|
STACK_EXCHANGE
|
I have a problem in the JCL below: the sequential dataset in DD statement PDSMPDS contains a number of PDSs with many members in each. What I wanted to do was use PDSM18 to update all members in all the PDSs listed in the PDSMPDS DD file, but in fact what it does is update any instances of the target strings (&TARGn) with the replacement strings (&REPLn) in the sequential file itself.
I have read Chapter 2, "String Scan and Replace", for PDSM18 (from PDSMAN_Productivity_Tools_Ref_ENU.pdf) and it doesn't look like I can pass in a file listing the PDSs to scan and replace.
This is part of an automation project, and I don't want to hardcode the PDS names concatenated in the PDSMPDS DD statement, as the job will fail if a particular one specified does not exist.
If I can update the current structure to make this work, great, but another solution would also be welcome.
An example of the QA.&USERID..DSNLIST file follows the code; there can be a lot of PDSs in this file, e.g. 60+.
//* Parse the DSN LIST from the USERID.temp.dkillson file created
//* when we used the dz CLONE utility, put these DSN into a new file
//* that will be used as input in the next step when we REPLACE old
//* complex strings with new complex strings.
//DSNLIST EXEC PGM=IKJEFT01,DYNAMNBR=30,
//SYSEXEC DD DSN=QA.SYS.AUTOMATE.REXX,DISP=SHR
//INDD DD DSN=&USERID..TEMP.DKILLSON,
//OUTDD DD DSN=QA.&USERID..DSNLIST,
//SYSTSPRT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SYSTSIN DD *
//* REPLACE target strings with new strings in all members in all PDS's
//* contained in input file in the PDSMPDS DD statement.
//REPLACE EXEC PGM=PDSM18,PARM='.ALL'
//PDSMPDS DD DISP=SHR,DSN=QA.&USERID..DSNLIST
//PDSMRPT DD SYSOUT=*,OUTLIM=1000000
//SYSIN DD *,SYMBOLS=JCLONLY
Any help would be greatly appreciated.
The responses on this forum are voluntary and given when / if people have time and knowledge of the subject. Nagging for faster assistance after just 2 hours will NOT get you faster results, and frequently such nagging causes the people who WOULD respond to not do so in the expectation that your appreciation will be lacking. If your problem is so urgent, convince your management to hire a consultant to assist you.
I haven't used PDSMAN a lot, but none of my experience indicates that what you want to do is possible. I'm not sure but there may not be ANY products on the market today that allow you to dynamically allocate data sets from a sequential file and scan them.
You could write a program in the language of your choice to parse your sequential file and generate the JCL, which then could be submitted to the internal reader. If you want to get fancy, your program could check for the existence of the data set and not write JCL if the PDS doesn't exist -- but that would depend upon which language you write your program in.
Specifying a sequential dataset on the PDSMPDS DD that contains a list of PDSs is NOT a concatenation. PDSMAN just sees the file name on the DD and processes that. You have to specify each individual name as a real concatenation on the PDSMPDS DD, and I'm not even sure PDSM18 will accept that; I have never tried it.
e.g. this is a proper concatenation ..
//PDSMPDS DD DISP=SHR,DSN=QA.ZZ140A1C.SOURCE
// DD DISP=SHR,DSN=QA.ZZ140A1C.TAPELOAD.CNTL
// DD DISP=SHR,DSN=QA.ZZ140A1C.TEST.INCLLIB
..etc for all the files in your list
It would be possible for the concatenation list to be a JCL INCLUDE member which is probably more like what you intended. You could read the JCL manual to understand how that works.
|
OPCFW_CODE
|
- Similar itemBoasting a bright shade of turquoise suede, these Sergio Rossi platform pumps are sultry and sophisticated - Almond toe, covered platform - Ultra-high heel - Pair with an elevated jeans-and-tee ensemble or a slinky cocktail sheath for evening
- Similar itemSnake-Embossed Leather Crossbody Bag, Nude Details Romy Gold snake-embossed leather crossbody bag with silvertone hardware. Doubled snake chain and leather shoulder strap; 22" drop. Zip top closure with tassel pull. Interior, twill lining; two zip pockets. Exterior, two front zip pockets with tooth-shape pulls. Back slip pocket; Romy Gold logo plate at bottom center. 7 1/2"H x 7"W x 1 1/2"D. Imported.
- Detachable woven chain strap Top zip closure with tassel pull. Exterior features front quilted logo detail. Interior features slip wall pocket. Dust bag included. Approx. 6" H x 7.5" W x 2" D. Approx. 25" strap drop. Made in Italy. Materials: Genuine leather exterior, textile lining.
- Similar itemGiuseppe Zanotti black suede peep-toe Sharon pumps styled with a self-covered platform and sky-high stiletto heel. 5.75"/145mm heel; 13mm platform (approximately). Suede-covered platform and stiletto heel. Slips on . Leather lining . Leather sole. Available in Black . Made in Italy.
- Similar itemShow off your feisty side in this daring pump by Vince Camuto. A chic leopard print fur is decorated by bold spiky studs through out the entire upper. A spunky 5 inch heel and sturdy 1 inch platform finish off this fierce closed toe pump. Man Made Upper/Man Made Sole. multi-color.
- Similar itemGucci. Soho Round Leather Crossbody Bag, White. Gucci crossbody bag in pebbled leather. Golden chain strap with 21" drop. Embossed interlocking G logo. Zip-top closure with tassel pull. Inside, linen lining; one slip pocket. 5.5"H x 7"W x 1"D; weighs 1.1 lb. Made in Italy.
- Similar itemGucci light pink 'Lisbeth' patent leather platform pump, black piping around the heel, 135mm heel with a 25mm platform and made in Italy.
- MICHAEL Michael Kors Small Fulton Logo Crossbody Bag, Brown Details MICHAEL Michael Kors crossbody bag. Golden hardware. Adjustable leather and chain detail strap; 24" drop. MK circle logo on snap-flap closure. Inside, one open pocket and Michael Kors embossed leather tag. 5. 5"H x 7"W x 2. 5"D. Imported. Designer About Michael Kors: Famous for producing polished, sleek, sophisticated American sportswear with a jet-set attitude, fashion designer Michael Kors launched his line in 1981. Today his coveted collection includes a full range of shoes, accessories, handbags, jewelry, timepieces, and fragrances. Color: BROWN.
- Similar itemExtravagant snake embossing brings textural intrigue to a burnished leather handbag accented with brushed goldtone hardware. Brand: Brahmin. Style Name: Brahmin 'Mini Duxbury' Crossbody Bag. Style Number: 741533. Top zip closure; Optional, adjustable crossbody strap; Exterior slip pocket; Interior zip, wall and cell-phone pockets; Logo-jacquard lining
|
OPCFW_CODE
|
Here is how to get a basic WordPress site up and running using WP-CLI. This is particularly helpful for spinning up local WordPress installations for development of plugins and themes. You can also use WP-CLI to get started with unit testing your WordPress plugins with PHPUnit. For now, let’s see about getting WordPress installed.
Set Up Default Configuration File
I primarily use WP CLI for local development of plugins and themes and have some standard configuration options that I put in my `~/.wp-cli/config.yml` file. This saves some typing. Here is an example of my `config.yml` file:
core install:
  admin_user: devuser
  admin_password: devpass
  admin_email: email@example.com
  title: WordPress Development
core config:
  dbuser: my-db-username
  dbpass: my-db-password
  dbhost: localhost
  extra-php: |
    define( 'WP_DEBUG', true );
    define( 'WP_POST_REVISIONS', 0 );
Formatting The config.yml File
The format of the configuration file mirrors the WP CLI commands. For example, if you are setting values for the command `wp core install` then your heading will be `core install:` and below that you provide values for the install command. Likewise, to provide values to use with the `wp core config` command, your heading will be `core config:` and then you list the values used with the config command.
You can store a configuration file like this in several locations depending on how you want to use it. I use these same settings for many of my development sites, so I keep it at `~/.wp-cli/config.yml`, inside the `.wp-cli` directory in my home directory on my development laptop.
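One of those other locations (an assumption worth verifying against the WP-CLI documentation for your version) is a per-project `wp-cli.yml` in the site's root directory, which lets a single site override the global defaults. The values below are illustrative:

```yaml
# wp-cli.yml in the project root; assumed to take precedence over
# ~/.wp-cli/config.yml for commands run from inside this directory.
core config:
  dbuser: project-db-user
  dbpass: project-db-password
  dbhost: localhost
```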
First, let’s download WordPress to the document root of our web site. We will do the following three things:
- Make a directory for our new WordPress website called `wordpress`
- Change into the new `wordpress` directory we created
- Use WP-CLI to download WordPress
mkdir ~/sites/wordpress
cd ~/sites/wordpress
wp core download
Configure WordPress Installation
Next we need to set the values for the `wp-config.php` file, like the database connection information. The database name you specify does not have to exist yet; we'll use WP CLI to set that up in a moment. We only need to provide a database name and not any of the other database connection information like the username, password, and host, because that's coming from your `~/.wp-cli/config.yml` file. If you do not have those settings in that file, you would need to provide them on the command line.
wp core config --dbname=my-database-name
(NOTE: Remember to change `my-database-name` to the actual name of the database you’d like to use.)
Create The Database For Your Site
WP CLI has a command that will look in the `wp-config.php` file (that we created earlier) and use the database connection information to create the database for your WordPress site. In order for this to work, the database user that you specified needs to have permission to create databases on your system. In my local development environment, I have a single database user that has access to all my databases and also has permission to create new schemas. A live / production setup would be more secure than this and would have its own database user with limited permissions. This is very convenient for development purposes, and WP CLI can take care of it all in one quick command:
wp db create
You don’t need to provide any of the database credentials because they are already in the `wp-config.php` file.
Create The WordPress Database Tables
Now we’re ready to create the WordPress database tables for our site. Again, many of the required parameters are getting pulled from the `~/.wp-cli/config.yml` file. So the only parameter we have to provide is the URL for where the site will live.
wp core install --url=host.domain.dev
The URL you provide can be any local URL, it doesn’t have to be a live URL that is publicly accessible.
We have used WP CLI to download, configure, and install a WordPress site. Keeping in mind that some of the values for the commands we issue come from your `~/.wp-cli/config.yml` file, here are all the commands you need to get up and running with a fresh WordPress site installed with WP CLI.
mkdir ~/sites/wordpress
cd ~/sites/wordpress
wp core download
wp core config --dbname=wordpress
wp db create
wp core install --url=host.domain.dev
As you can see, once you have WP CLI set up, it's a quick, easy way to get WordPress installed, and hopefully it will save you a lot of time.
If you are interested in more details about this, check out:
|
OPCFW_CODE
|
A Better Let's Encrypt Client 2015-12-07
Let’s Encrypt recently entered public beta, providing free TLS certificates for everyone, forever. We should pause for a moment to consider how important that statement is. Okay, still with me? Let’s Encrypt works by verifying that you control the domain you are requesting a certificate for by giving you a random token and then making a request against the domain and expecting to get that token back. This is commonly called a Domain Validated (DV) or Domain Control Validated (DCV) certificate, and has been the norm for TLS certificate authorities for a while now. A new thing with Let’s Encrypt is their DV process is actually fully specified in a protocol called ACME, though Let’s Encrypt doesn’t currently implement the full ACME spec.
What I Want
I write tools for a living. This means I write software that gets used in wide variety of environments and use cases, and I have to be very careful about the assumptions I make. What I want out of Let’s Encrypt (and future ACME-supporting certificate issuers) is something like this (using Chef’s DSL as an example):
tls_certificate '/etc/something/cert.pem' do
  hostname 'example.com'
end
An abstract description of where to put the certificate and what hostname to request it for, and then my code takes over and handles all the details so the user doesn’t need to know or care.
In building this I need to plan for some restrictions on what I can and can’t do. In most situations, something will already be listening on ports 80 and 443, and I won’t know what it is. It might be Apache, or Nginx, or HAProxy, etc etc. As I don’t know what software is bound to those ports, I also don’t know how it is configured and can’t assume I can serve files through it. It might be a dedicated proxy service or webapp container with no capability to serve files, for example.
- Can’t listen on ports 80 or 443.
- Can’t serve files or interact with whatever is already on those ports.
- Must be able to respond to an ACME DV request with a provided token.
Let’s look at the options I’ve come up with so far.
Listen On A Different Port
The easiest solution. No muss, no fuss, whatever is on port 80 keeps on doing its thing while we use port 81 (or something else below 1024) to run the DV process. Unfortunately this is currently impossible as the ACME spec does not allow using alternate ports. I’m hopeful the spec will be amended in the future, but that doesn’t help me right now.
Shut Down Whatever Is Using Port 80
Again a very simple solution: just service stop whatever is using port 80, run our DV, and then start it back up. I mention it mostly for completeness because, as mentioned, we have no way to know what the thing on port 80 is, and even if we did I doubt anyone would want their website being randomly shut down for a minute every few months. This seems to be the expected use case for the standalone mode in the letsencrypt client tool.
From here we have to get creative.
Iptables REDIRECT For DV Request
Linux’s iptables has a feature that allows rewriting the destination port on a packet. This allows things like transparent interception by Squid and other proxy tools, but we could use it to find the packets containing the DV request and quietly move them to another port to talk to a service we control. This would be entirely Linux-specific, but at this point I’m okay with that as we would need alternate ports as described above to have a cross-platform solution.
The trouble is in only redirecting the DV traffic. The simplest way to do this would be to use the source IPs that correspond to Let's Encrypt in the iptables rule, but Let's Encrypt has vetoed this solution.
Another possible option is to use iptables' string matching module, but redirect rules have to run very early in the firewall processing and it seems to be before the packet contents are available. It is possible that there is a way to execute on this, but I've not been able to find it. If you know the right iptables voodoo please let me know.
Iptables REDIRECT For All Traffic
Failing a more specific rule, we can still use a redirect to rewrite all traffic coming in on port 80 to go to our service instead, and then proxy everything except the DV request. This works, but has the downside of putting things behind a possibly unexpected proxy. The remote IP on the web app request will temporarily be localhost instead of the true client IP. There are ways to cope with this (PROXY protocol), but an app caught unawares could break in new and exciting ways. Additionally there might be performance concerns in sending all traffic through a proxy likely written in Ruby.
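A minimal sketch of such a blanket redirect, assuming Linux with iptables and root privileges; the port number (8402 here) is an arbitrary example for wherever our temporary responder listens:

```shell
# Send all inbound port-80 TCP traffic to a local responder on port 8402
# before normal delivery (REDIRECT is only valid in the nat table's
# PREROUTING/OUTPUT chains). Port 8402 is illustrative.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8402

# After the validation completes, delete the same rule to restore
# direct delivery to whatever normally listens on port 80.
iptables -t nat -D PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8402
```

While the rule is active, the responder on the example port would have to proxy all non-ACME traffic back to the real service, which is exactly the localhost-remote-IP caveat described above.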
Like before, it is possible there is a way to re-inject the packets into the kernel preserving the original source IP, but I've not found it. Linux's TPROXY system is close, but requires some complex network topology to make it work.
Use Libnetfilter_Queue From Ruby
Libnetfilter_queue provides a way for a userspace process to insert itself into the firewall process.
This is also Linux-only, and somewhat more bespoke than iptables redirects, but seems to be supported on all the distros I checked. It works by registering with the kernel and then processing each packet as it comes in, deciding to accept, reject, or alter each packet as needed. Now the fun part: Chef's client omnibus packages include both libffi and the ffi Ruby gem, allowing me to directly call C APIs without installing a compiler (I take it as a given that I can't request a compiler). We could potentially have Chef connect as the active netfilter queue, watch for the DV request packets, and rewrite them to a new port as needed.
There are, however, some pretty steep downsides. The data you get from the kernel is a raw TCP packet as a byte array. This would mean parsing both TCP and HTTP before we can even get to the data we need, not to mention this needs to process not just the traffic on port 80 but every packet the machine is getting. I am not normally one to jump on the “dynamic languages are slow” bandwagon but even I shy away from a performance problem of this magnitude. Additionally this would require installing the userspace libnetfilter libraries via the OS packaging tools, which would be generally unpleasant and would likely make supporting the long tail of weird Linuxes more difficult.
Use Libnetfilter_Queue From Go
So if Ruby is probably too slow and we don't want to have to install the userspace library, that means we are looking for something that is relatively fast for processing string-y data, supports static compilation well, and can easily call C APIs. The only thing I know of that fits all three criteria is Go. This would mean writing the netfilter code in Go, building a static binary (or two, 32 vs 64 bit), including it via a cookbook_file resource, and just hoping that the whole house of cards doesn't come tumbling down.
The upside is significant though: this could provide a truly transparent solution.
These are the possible solutions I’ve come up with so far, and everything is either impossible or seems like a terrible idea. I would love to see either alternate ports or static IPs for the validators as either would allow for a reasonably elegant solution. Without either, I’m all ears for better options.
Which To Use?
And now we reach the audience participation section. I would really like to write this letsencrypt cookbook with the simplicity I mentioned above. Doing that today requires I pick between one of the aforementioned options (or hope someone else has a better one). Of all of those only two seem workable:
- Iptables REDIRECT all traffic through a temporary proxy.
- Use libnetfilter_queue from Go and distribute a static binary.
The first has the REMOTE_IP issue I mentioned, but is far simpler and seems less likely to explode hilariously. If I went this route I would have to clearly document the restriction and offer the ability to configure forwarding headers.
The second could be truly transparent but requires a lot of complex
|
OPCFW_CODE
|
Kudos to Marcotronic!!!
I think he should get an award or something.
Make sure that the script is run before rendering. Also try regenerating the map using GUV tiles or Maya auto mapping. Otherwise it should be working. I am not on a machine with Maya at the moment so I can't check the scene file.
mike0006: I agree. Marco found the change in the mess of documentation on the Autodesk site, a massive undertaking. Dude, I owe you a beer or something.
I got your PM but I haven't seen a reply yet. Did you get it working? If the script reports nothing back it means it worked. If you get an error it usually means it has run and there is no ccmesh to work on.
It may all be moot at this point. Check out Marco's thread for an update on the Service Pack to Maya 2008.
Thanks a lot, Scott Spencer!
Sorry, I had used the script you supplied before the Maya SP1 release, but couldn't get a good fix. The cause is unknown.
I have fixed the displacement seams using Maya SP1, but some new errors are appearing: some spines on the UV border, and the displacement lost a lot of the skin bump detail.
To avoid the spines, I reduced the UV border from 8 to 2 in ZB-DE3, but the displacement seams returned like a nightmare. When I used the script you supplied to create a ccmesh node on the 'mentalraySubdivApprox1', the seams were fixed but the triangulated faces developed holes. The Maya 2008 SP1 release notes (myr_maya2008_sp1_releasenotes.pdf) mention "for ccmesh objects, fixed texturing on seams", but the holes still appear.
Isn't there a good way to fix them all: spines, seams, and holes?
Last edited by zsz98781; 03-12-08 at 03:58 AM.
Try MeshLab. It has many filters that let you close holes, clean, decimate triangles & polys, convert files, etc. It likes to crash on large files, which is a pain, but it's free.
Sorry! You haven't understood my meaning.
When I convert my model to a ccmesh with the MEL script marcotronic provided and render with mental ray, some holes appear. It is a phenomenon in the rendering, so MeshLab can't be used to fix it, can it?
sorry, I don't know if I have understood you right - I'm a bit confused now about what you did in which order... But you don't need to do anything concerning that ccmesh stuff when you use Maya 2008 with SP1 - no need to run that MEL script for the ccmesh fix...
'No need to run that MEL script for the ccmesh fix...'??? I have installed Maya 2008 with SP1, but the seams are still here~~~~~
Last edited by zsz98781; 03-12-08 at 05:30 PM.
mhhhh.... that's strange.
Are you sure you have installed SP1 correctly? Does Maya show "Maya 2008 Service Pack 1" in the splash screen when starting Maya? At first I had installed SP1 without de-installing Maya 2008 and thought it had just updated to SP1, but that wasn't the case - you have to de-install Maya before installing the SP1 package. Did you do that?
Last edited by marcotronic; 03-13-08 at 01:31 AM.
Yes! I did de-install Maya before installing the SP1 package, and the splash screen shows SP1.
Perfect seamless head. I also used this script in conjunction with the "ZBrush to Maya" video tutorial.
Can someone who has installed service pack 1 (or has extension 2) tell me what the mental ray screen says in the output window when you load maya? The date of the mental ray version?
Is the fix inside the SP1 for maya or in the mental ray pack?
I don't have access to a machine with either at the moment - but I tried a 32-bit displacement on a machine with Extension 2 and still got that wonderful seam problem. I had to use the Scott script to fix it, though the rendering time was much faster (and it seemed to support TIFFs better?).
Last edited by kgg777; 03-16-08 at 10:22 PM.
It has been a few weeks since Microsoft’s Ignite conference in Chicago. I hope you had as much fun as I did meeting peers, technology leaders, and long-time friends (both personal and professional). I was fortunate to be selected by the Microsoft Office 365 Team to staff the Delve, OneDrive, SharePoint and Office 365 kiosks during the conference. I met many amazing people, all of whom asked excellent questions around every product. Based on those conversations, and what we heard from Microsoft, these are my Top 5 takeaways from Ignite:
The First ISP Backbones Offer ExpressRoute for Office 365
Azure ExpressRoute connectivity offers Office 365 subscribers the ability to connect to Office 365 over a private Layer 3 MPLS network, via ExpressRoute partners and internet backbones like AT&T or Equinix. A listing of all Azure ExpressRoute partners is available here.
So what does this mean in layman’s terms? It provides your company with a private, faster, and more secure way to connect your cloud and on-premises applications in both Azure and Office 365. This feature will be generally available in Q3 of 2015, and you can find planning tools here.
Groups in Office 365
A group is a shared workspace for email, conversations, files, and calendar events, where members can conveniently collaborate and quickly get projects done. Groups can be public or private, and are easily managed by GUI or PowerShell for Office 365. Groups will work with all Office 365 products and more in 2015.
Benefits of Using Groups
- Single Definition – Groups are determined by defining Teams
- Self-Service – Enables instant discovery of information from Outlook, OneDrive, Delve, Calendars and SharePoint Online
- Context & History – Includes versioning, who has joined, base level auditing
- Simple to Manage – Easy to manage, invite, and collaborate
The screenshot above offers a look at the Tahoe Partners Mentors Group, a place where volunteer mentors and mentees can collaborate with other participants in the program. This is an example of a public group that lets others find the group; for private groups, permissions are granularly controlled.
You can find more information on groups at Office 365’s support center.
Next Generation SharePoint Hybrid Search
SharePoint Hybrid Search is set to arrive by end of CY2015; some highlights include:
- Index on-premises sites (2010 & 2013) with Office 365
- Office web previews work from on-premises
- File shares can be included
- DLP Sensitive Data Search with Hybrid
Delve offers another option to search for relevant information.
Mobile First: iOS and Android are First Class Citizens
Microsoft CEO Satya Nadella’s mobile-first and support all devices mantra was evident throughout both the Build and Ignite conferences. A noted departure from earlier, Windows-only, messaging, the vast majority of demos showcasing mobile ran on iOS devices. Further indication can be found in the steady release of compelling iOS and Android native apps ahead of Windows platforms, such as the Delve app. In fact, iOS already has Word, Excel, OneNote and PowerPoint.
SharePoint Server 2016 is not the Last Version of SharePoint On-premises
It’s no secret many people are worried that SharePoint as we know it is over. That couldn’t be further from the truth. Julia White, Microsoft GM, and Bill Baer, SharePoint Product Manager, both reinforced previous statements we’ve read and heard: that Microsoft will create SharePoint Server as long as customers want it. However, you can expect to see very little hardware in your office other than your tablet and internet connection. Cloud is here and becoming bigger, faster, more secure and it’s not the future, it’s now. SharePoint Server 2016 is set for public release by Q2 of 2016. For me, the biggest takeaway was the ability to upgrade to 2016 and jump past the 2013 schema changes. Another big win for on-premises is the announcement that Delve is being released for SharePoint Server 2013 on-premises this fall.
How are you going to upgrade your SharePoint?
Stephen Foskett, in his recent three-part blog series on "The I/O Blender" clearly describes the problems and potential solutions for managing storage in virtualized environments. At Tintri, we have been designing and building flash-based storage systems solely for virtualized environments for several years. We have thought a great deal about this problem, and were delighted to see that we are not alone in our thinking.
The basic problem, as Stephen describes it, is that virtualization disrupts the flow of information needed by storage systems to efficiently and effectively manage storage for virtual environments. In a physical world, where an application running on a single server accesses dedicated LUNs, the storage system can easily infer and optimize for application behavior based on the configuration and access patterns to the LUNs. Virtualization, however, disrupts this entire process by running many applications in virtual machines on a single physical server and blending all the IO requests from these applications into a single IO stream, often to a single LUN.
As a result, storage sees what looks like random IOs and can no longer infer or optimize for each application. More than this, all the storage management operations such as quality-of-service, snapshots, cloning and replication, which operate at the level of LUNs and volumes, can no longer be used to tune and manage virtualized applications. In effect, just like Superman in the face of kryptonite, admins are unable to use the advanced storage management capabilities that make enterprise storage so valuable in virtualized environments.
Server virtualization is a big problem for conventional enterprise storage arrays. It reduces or eliminates the value of the very features customers seek when selecting an array.
Stephen goes on to say that one way out of this bind is to introduce VM-aware storage protocols, such as the vVols protocol proposed by VMware, designed specifically for managing and accessing storage in virtualized environments. We wholeheartedly agree that this is a step in the right direction and are happy to see that VMware is taking steps to address it. In fact, one could consider VAAI (vStorage APIs for Array Integration) as the first step. As Stephen notes, however, VAAI improves the efficiency of storage operations in VMware environments but not the intelligence of storage. That is, storage systems still have no way to demultiplex the blended IO stream into its constituent parts, and you still cannot use basic storage management operations such as snapshots on a per-VM basis: Hence the need for the vVols protocol.
Of course, vVols will not be built in a day. VMware still needs to specify the protocol and get buy-in from storage vendors. Then vendors must implement the protocol in an efficient and effective manner. This will be no simple matter and there will likely be huge differences in the quality of implementation and interoperability between vendors. Stephen notes that not every storage vendor will be able to effectively support the vVol protocol and that "this promises to be a very tricky implementation indeed."
Tintri was early to recognize the severity of storage management problems in virtualized environments. By leveraging existing hypervisor management interfaces, we have implemented VM-aware storage management features without waiting for a VM-aware storage protocol. And although our storage is hypervisor-agnostic, we are looking forward to providing the most efficient and effective implementation of the vVols protocol.
As an industry, we have come a long way without VM-aware storage, but we have now reached a point where storage is the dominant cost in deploying and managing virtual infrastructures, and it is evident that the traditional approach to storage management based on LUNs and volumes is becoming less and less effective. VM-aware storage not only allows us to restore the effectiveness of storage management in virtualized environments, but also makes storage easier to use and more powerful than in physical environments.
Unique control with VM-level actions for infrastructure functions including snapshots, replication and QoS make protection and performance certain in production, and accelerate test and development cycles.
I want to create a hyperlink in a course so that, if you click on the link, it will send an email to a specific person. Is this possible?
Moodle isn't really a mail client kind of application, but there are other ways of getting a message to an instructor, for instance. The regular way would be to go to the participant list in a course, open the profile of an instructor and then choosing the Send a message option there.
If you do want to add a mail option, you could just use the 'anchor' button in the text editor and then add a 'mailto:email@example.com' type address instead of an 'https://www.example.com' one. This will trigger a new mail to be opened from the user's email application.
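For example, the underlying markup that the anchor button would produce looks something like this (the address and subject below are placeholders; the `?subject=` part is optional):

```html
<!-- A mailto: anchor: clicking it opens the user's default mail client
     with the To and Subject fields pre-filled. -->
<a href="mailto:email@example.com?subject=Course%20question">Email the instructor</a>
```

Note that spaces in the subject should be URL-encoded (`%20`), since the whole `href` is a URL.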
Agree with Joost's recommendation to use the Moodle interface rather than a mailto: link. Why? I use a Mac laptop but my mail is on Google. Clicking a mailto: link will call up Mac Mail on the laptop, and I've not configured Mac Mail and don't plan to.
Your users might be using a PC rather than a Mac, and they too might be using an IMAP/webmail kind of setup for their email. Confusing to users... plus if your mailto: link doesn't work, they will be contacting you to 'fix it'. :|
Interesting, Rick. But what launches on the student's computer?
Replace [yourgoogledomain] with your google domain for email.
That launches a browser window and goes to Google, where, if you are not logged on, it prompts you to log in, then sends you directly to composing an email message.
Tinkering with that some more, one might be able to pass the 'To line' addy!
Shadi seemed to be asking for some techniques, and this is one. Yep, what happens if the student doesn't have an email client? The same thing for many many "mailto" links... nothing.
Of course, another technique is to create a "web form" probably including reCaptcha, and have the students directed to this form when they have questions. Yep, a URL resource would work. I don't know if Moodle can handle these kinds of web page forms, but I would put it external to Moodle, like a school web page.
Or provide both approaches.
I once supported a corporate Moodle that used an app called MachForms - commercial, but not that expensive.
It gave the ability to create forms for all sorts of needs, and it could be installed inside the Moodle code directory without clashing with Moodle - not even with git updates/upgrades of Moodle. It could be configured to drop/strip out all of its own navigational links so that the form could be embedded in Moodle.
The forms code gets backed up with the Moodle code backups, and only a separate dump of the database used by MachForms is needed.
I don't work for them ... and get nothing for mentioning them.
My 2 cents!
mailto:firstname.lastname@example.org?subject=My Email Test&body=This is only a test!
Permanent audio stutter on LGE Nexus 4 with the Deezer Android SDK
Audio playback has been tested successfully on Samsung Galaxy S3 and HTC One, but is severely broken on an LGE Nexus 4 running Android 4.4. What happens is that perfectly fine audio can be heard for a fraction of a second, then there's a few seconds of silence followed by another short piece of audio, then silence, and so it goes. So it seems like the audio streaming logic ends up in an eternal start-play-underrun-stop loop.
Every second or so I see the following warning logged:
12-09 00:55:56.982 10842-14365/com.soundrop.android W/AudioTrack﹕ releaseBuffer() track 0x7b03f4e0 name=s:176;n:2;f:-1 disabled due to previous underrun, restarting
12-09 00:55:57.583 10842-14367/com.soundrop.android W/AudioTrack﹕ releaseBuffer() track 0x7b03f4e0 name=s:176;n:2;f:-1 disabled due to previous underrun, restarting
12-09 00:55:58.594 10842-14369/com.soundrop.android W/AudioTrack﹕ releaseBuffer() track 0x7b03f4e0 name=s:176;n:2;f:-1 disabled due to previous underrun, restarting
12-09 00:55:59.595 10842-14371/com.soundrop.android W/AudioTrack﹕ releaseBuffer() track 0x7b03f4e0 name=s:176;n:2;f:-1 disabled due to previous underrun, restarting
12-09 00:56:02.047 10842-14379/com.soundrop.android W/AudioTrack﹕ releaseBuffer() track 0x7b03f4e0 name=s:176;n:2;f:-1 disabled due to previous underrun, restarting
This led me to think about possible audio buffer differences between the devices, so I did some probing:
HTC One (good playback): AudioTrack.getMinBufferSize(44100, STEREO, ENCODING_PCM_16BIT) => 16932
LGE Nexus 4 (bad playback): AudioTrack.getMinBufferSize(44100, STEREO, ENCODING_PCM_16BIT) => 7056
My speculation is that the Deezer Android SDK sets too small a buffer size on this particular device: it seems to pick a buffer size that is 10x the reported minimum, and the minimum reported on the Nexus 4 is much smaller.
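To make that speculation concrete, here is a plain-Java sketch (not actual Deezer SDK code; the method name and the 10x multiplier are taken only from the guess above) of what sizing the buffer at 10x the reported minimum would yield on each device:

```java
public class BufferSizeSketch {
    // Hypothetical: mimic an SDK that sizes its playback buffer at 10x
    // the platform-reported minimum. The multiplier is speculation based
    // on the observed values, not taken from the Deezer SDK sources.
    static int pickBufferSize(int minBufferSize) {
        return 10 * minBufferSize;
    }

    public static void main(String[] args) {
        int htcOneMin = 16932; // getMinBufferSize() on HTC One (good playback)
        int nexus4Min = 7056;  // getMinBufferSize() on LGE Nexus 4 (stutter)

        System.out.println("HTC One buffer: " + pickBufferSize(htcOneMin)); // 169320
        System.out.println("Nexus 4 buffer: " + pickBufferSize(nexus4Min)); //  70560
    }
}
```

If the Nexus 4's resulting 70560-byte buffer is too small in absolute terms for smooth streaming, that would explain why only this device underruns.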
UPDATE: Just reproduced the audio stutter on an HTC One running 4.4, where getMinBufferSize() returns 16932 (like it did on Android < 4.4). So this issue is clearly not device-specific, but related to OS-level behavioral changes starting with KitKat.
See if you can find an open source example which plays on the new version without stuttering, and see if you can figure out what is different about what they are doing from what you are.
This is indeed an issue raised by the Android 4.4 update. Something in the AudioTrack implementation has changed, and caused these underrun issues.
We're currently working on a fix on this problem and we'll release it as soon as possible.
I had this issue while debugging with a "Method Breakpoint"; apparently that really slows things down.
week 1 lecture 14/10/2019, user-centered design: "consider user needs, how they think, how they behave, and incorporate that understanding into every aspect of my process"
These two lectures introduced us to concepts such as user-centered design, which includes conducting our own user research, creating personas and using other methodologies to collect and document our users; this experiment was a visual warm-up that helped make concrete the potential methods of user research and design.
The persona and user journey map share a similar visual style, which aims to make them look clear; the colours used in these designs are yellow and black, making them feel more steady. I have explored different styles in my design process: collage style and hand drawing. I used a circle's high points and low points to represent the customer's emotions in my customer journey map.
The persona contains basic information about a woman who often shops in Winchester; this information helped me to create the other two maps.
The journey map follows a day of research in Winchester, conducted by the data-collection team and myself. The goal of this research was to find out what problems people shopping in Winchester have, along with their feelings, thoughts, and goals.
The empathy map takes the research behind the persona and journey map and combines this information in a visual and useful way, covering what the user is doing, thinking and feeling, as well as demographics, goals, motivations, habits and favorite brands.
The final composition for this project combines the persona and journey map images, displaying them in one A2 poster.
People carry out particular activities in particular contexts using particular technologies.
A data collection plan (our group chose the WSA bus one to plan)
week 2 lecture 21/10/2019, user modeling (personas) and empathy mapping (person in the middle): well researched, with enough data, like a customer journey map (say, think, do, and feel). Use the data we gather to build personas.
A data collection experiment (our group chose to take photos of everybody's hand and cuff area)
The results our team got.
Project 2 tasked our team with gathering information about shopping in Winchester – no specifics were given and it was up to us to decide what information we gathered and how. Documentation of this assignment is below.
- Week 1 and 2 Lectures (Information of User Design Methods)
- Workshop: Something Different
- Design process
Questionnaire develop process
Shopping Research process
Customer behavior observe
Personas in Design Thinking
Personas are fictional characters, which you create based upon your research in order to represent the different user types that might use your service, product, site, or brand in a similar way. Creating personas will help you to understand your users' needs, experiences, behaviors and goals. Creating personas can help you step out of yourself. It can help you to recognize that different people have different needs and expectations, and it can also help you to identify with the user you're designing for. Personas make the design task at hand less complex, they guide your ideation processes, and they can help you to achieve the goal of creating a good user experience for your target user group.
In the design thinking process, designers will often start creating personas during the second phase, the Define phase. In the Define phase, design thinkers synthesize their research and findings from the very first phase, the Empathize phase. Using personas is just one method, among others, that can help designers move on to the third phase, the Ideation phase. The personas will be used as a guide for ideation sessions such as brainstorms.
First I wanted to try to create a user profile (using a collage style)
Journey Map Test
With this one I want to express the shopping malls in Winchester, in order from where they start to where they end on the street.
I am not satisfied with the effect of this test outcome, because it is hard for the audience to read, so I continued to test other styles.
Journey Map Test 2
I have been using Windows for my personal use for the last 12 years or so and have had Windows 95, Me, XP, and Vista on various desktops and notebooks. My main use is the internet/email and photography. I have been working on digital photos (scanned film or digital cameras) since 1997 so I know very well which features of my current software I use and consider important and which I don't. I am trying to find out if the existing software on Linux has what I want. So far, it looks pretty good, but not perfect of course. :-) If you have any experience using Linux for photography then I would be happy to hear your thoughts.
There are several reasons I am looking at Linux (not necessarily in order of importance):
1. Windows is a huge target for various kinds of malware so there is a constant struggle to keep Windows updated with security fixes. Same for anti-virus software, firewall, etc. I realize that Linux is not immune to malware, but it is much less a target and, at least, in 2009 Linux just doesn't have much of a problem.
2. Although I am fairly happy with Photoshop CS2 the version of ACR it has won't accept my current camera's raw files and will not accept future camera raw files. Of course, I can give Adobe a few hundred dollars and get a new version of Photoshop. Then in a year or two do it again and then again and then again -- each time just because I need a new version of ACR. The story is often similar for other software. Sometimes it is because the version of software that worked on Win95 or WinXP doesn't work right with Vista or there are just new features and/or bug fixes I want. I am not saying that there is anything wrong with companies wanting to make money. Far from it. I am just saying that I like the idea of Linux and OSF so that *I* can get off that merry-go-round. :-)
3. Every Windows computer I have ever owned has needed to have a clean Windows install at least once while I had it, sometimes more. After enough time the computer has started acting funky or some problem has required me to do the clean install. Installing Windows is usually not all that difficult *but* then I have to download and install many megabytes (usually hundreds of megabytes) of Windows updates. Also, when that is all done I have to reconfigure everything. Then the worst part starts. I have to get all my CDs and one by one install all of my applications. Besides the Windows authentication most of the apps also have various authentication procedures. Even some Photoshop plugins have it. This is a real pain in the butt. Then I have to download and install updates for all of them. Then I have to configure all of them. Over the years I have had to do this quite a few times. Assuming I have all the CDs and a fast internet connection and there are no problems (I'll get back to this) then it takes many hours of work. After all of this is done then I have to get all my data files off backup and put them back on the computer.
4. One time while installing Photoshop it just hung part of the way through. After a very long wait I finally rebooted and Windows was hosed. I tried various things and finally did a clean install of Windows. I have had other less serious problems from time to time when installing software. Not often, but a few times.
5. I travel a lot and often spend extended periods in one place or another. Except when I am doing backpack travel I also have my 17" notebook with me (like right now). I also have to carry the Windows CD and app CDs in case I have to go through the nightmare of step 3. This has happened to me. If I didn't have all this stuff then I would really be up the creek.
6. After I went from WinXP to Vista a couple of years ago I discovered that a few Photoshop plugins that worked fine with Photoshop and Paint Shop Pro on XP don't work with Photoshop on Vista -- Photoshop crashes. Strangely, those Photoshop plugins still work fine with PSP on Vista.
7. I also like that Linux has a great software development environment with multiple compilers, assembler, linker, debuggers, etc.. Also, MySQL and lots of other stuff. Who knows, I might try to write a plugin for Gimp or Showfoto or Krita.
In 2002/2003 I did software development on Linux for 15 months. I installed Mandrake Linux on a couple of machines and it was surprisingly easy. Most software I wanted was automatically installed. No authentication, no stacks of app CDs, etc. I never had to reinstall Linux and I don't recall Linux ever crashing. I don't know if it is the same now, but it seems from what I have heard that Linux is still pretty robust and solid. At any time, if I need to, I can download a new Linux. Probably wouldn't need to do that though. Software updates are easy to get. Raw processing software gets updated regularly to handle new cameras. New features and bug fixes in apps are free.
By the way, besides Wine which will allow some Windows programs to be run on Linux -- I think my Picture Window Pro 3.5 is one of them -- I have also found that some Windows Photoshop plugins can be run with Linux Gimp.
I realize that I may find a few things that I would still need Windows for. If so I would have a dual-boot system with probably most of the disk space allocated to Linux.
The Gimp is only partially 16-bits at the moment, but it is supposed to be fully 16-bit in the next major release. There are also two other interesting image editors that I am looking at and they are both 16-bit: Krita and Showfoto. The image management program called digiKam (which also has an integrated version of Showfoto) looks very interesting too. There are several raw photo converters such as ufraw, which can be integrated into Gimp. Also, the free Rawtherapee is available for Linux as is Picasa. Anyway, I am still investigating, but Linux is looking pretty good.
One of the toughest aspects of learning Kubernetes is wrapping your mind around how services and internal containers are exposed to the outside world. There are a number of ways to do this and each has pros and cons, but there are definitely ways that are recommended for production environments. Using a Kubernetes Loadbalancer is one of those. MetalLB is a very popular Kubernetes load balancer that many are using in their Kubernetes environments. Let’s take a look at the Kubernetes install MetalLB load balancer process and see what steps are involved to install the solution and test it out.
What is a Kubernetes Loadbalancer?
Traffic from the "external" load balancer in a public cloud environment is directed to the backend pods, and the cloud provider decides how it is load balanced. Kubernetes itself does not offer a built-in network load balancer for bare-metal clusters. While Kubernetes does support network load balancer implementations via what is called "glue code," that code calls out to public cloud environments such as AWS, Azure, and GCP. This is great if you are running your Kubernetes clusters in the cloud. However, for those with bare-metal clusters on their own hardware, this leaves only NodePort and ExternalIPs to expose their Kubernetes services.
MetalLB provides a bare-metal load balancer
MetalLB is a freely available, open-source solution that addresses the problem described above with Kubernetes load balancers for bare-metal clusters. Even though it is open-source and free, many are using it in production and have had great success in doing so.
It provides a network load balancer implementation that integrates with the standard networking environments where bare-metal Kubernetes clusters are found. The implementation is straightforward and is meant to "just work."
The requirements for running MetalLB in your Kubernetes cluster are the following:
- A Kubernetes cluster, running Kubernetes 1.13.0 or later
- No other network load-balancing functionality enabled
- A cluster network configuration that can coexist with MetalLB
- Some IPv4 addresses for MetalLB to hand out
- When using the BGP operating mode, you will need one or more routers capable of speaking BGP
- When using the L2 operating mode, traffic on port 7946 (TCP & UDP; other ports can be configured) must be allowed between nodes, as required by memberlist
For my testing and labbing, I am running a bare-metal Kubernetes cluster using Rancher on top of VMware vSphere. It uses a Ubuntu cloud image as the Kubernetes hosts. Read the following relevant posts covering these topics:
- Rancher Node Template VMware ESXi – Ubuntu Cloud Image
- Create Kubernetes Cluster with Rancher and VMware vSphere
Kubernetes Install MetalLB Loadbalancer
To begin with, I am installing MetalLB using the Manifests approach. To install MetalLB using Kubernetes manifest, use the following lines. I am simply following the installation documentation found here.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
Create a Config Map for MetalLB
Once you have deployed MetalLB, you need to follow the documentation to deploy a Config Map. The config map is what determines the MetalLB network configuration and what IPs it hands out to services.
Below is simply the code copied from the documentation here. The only thing I am changing is the addresses section to match my local network. Paste the code into a temporary YAML file you can stick somewhere.
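For reference, a layer 2 config map in the MetalLB v0.12 format looks like the following sketch; the pool name and address range below are placeholders that you would swap for a free range on your own network:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default            # placeholder pool name
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # placeholder range; use free IPs on your LAN
```

The `addresses` range is the pool MetalLB hands out to services of type LoadBalancer, so it must not overlap with your DHCP scope.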
After you have the YAML file created and ready, we can deploy it using:
kubectl create -f /tmp/metallb.yaml
Testing your MetalLB configuration deploying Nginx
Now that we have installed MetalLB and created the config map for the network configuration it will hand out, we should be able to test that MetalLB works correctly. Let’s use an Nginx container deployment to test the handing out of IP addresses from MetalLB.
To deploy a test Nginx pod, you can use the following command:
kubectl create deploy nginx --image nginx:latest
You can then look at the deployment with:
kubectl get all
Exposing the Nginx deployment with type LoadBalancer
Now that we have deployed an Nginx test pod, we can expose the deployment using the type LoadBalancer.
kubectl expose deploy nginx --port 80 --type LoadBalancer
Using the kubectl get svc command, we can see the External IP is correctly assigned from the MetalLB IP pool. ***Note*** I will save you some time in troubleshooting an issue that really isn’t an issue. You won’t be able to ping the address handed out by MetalLB. I know I spent a few minutes trying to ping the address and it did not respond, making me think there was an issue. However, ICMP is not enabled for the IP address handed out for your deployment or at least this is the behavior in my lab.
Even though we look to have an IP address assigned from MetalLB, can we actually connect? It is a good idea to test end-to-end. Success! We can get to our Nginx deployment using the IP address assigned from MetalLB.
Kubernetes Install MetalLB Loadbalancer FAQs
- What is a Kubernetes Load balancer? A load balancer handles the automatic configuration of network addresses for your Kubernetes deployments and configures the network layer so that incoming traffic is able to reach your deployment running in your Kubernetes cluster.
- What is MetalLB? MetalLB is an open-source Kubernetes bare-metal load balancer solution that provides an in-the-box load balancer for your Kubernetes deployments. It is free to download and easy to configure with just an easy config map deployment.
- Why do you need to expose Kubernetes deployments? When you deploy services in your Kubernetes cluster, these are not reachable by default. You need to use NodePort, ClusterIP, or a Load Balancer to expose the services where they are reachable from the outside world. Otherwise, they will be on an internal island within your Kubernetes cluster.
- Kubernetes ingress vs load balancer? – An ingress controller like Traefik only handles Layer 7 application traffic. It does not take care of lower-level network connectivity. For that, you need a load balancer.
I hope this post covering Kubernetes Install MetalLB Loadbalancer and the process to do that, including testing, will help anyone who wants to learn more about MetalLB. MetalLB is a great way to handle Kubernetes load balancing. It is free to use and open-source. Many use it in production and have a great deal of success doing so. As always, keep learning and labbing.
|
OPCFW_CODE
|
AngularJS - How to pass name of NgModel as parameter in a function then access it with $scope
So, I am listening to the NgKeyup event, which fires a function that receives the current NgModel as follows:
<input ng-model="__name" ng-keyup="filterValue(this.__name, 'stringMax100')" type="text" name="unit-income-name" class="form-control" id="unit-income-name" maxlength="100" required>
this.__name is equivalent to the $scope.__name (as ng-keyup is an event from angular, this is the $scope)
Once my function returns an error, it enables a flag that shows an error. In this case I know what the name of the NgModel is, but what if I don't?
I would like to pass the name of the NgModel as a parameter and then evaluate it within the function. This is my idea:
<input ng-model="__somethingElse1" ng-keyup="filterValue('__somethingElse1', 'stringMax100')" type="text">
<input ng-model="__somethingElse2" ng-keyup="filterValue('__somethingElse2', 'stringMax100')" type="text">
<input ng-model="__somethingElse3" ng-keyup="filterValue('__somethingElse3', 'stringMax100')" type="text">
And from the code:
$scope.filterValue = function(ngModelName, type, $event){
$scope.eval(ngModelName)
// Or
eval($scope.ngModelName)
};
Or something like that, I am using "eval" as an example.
Thanks everyone!
Use the scope's $eval method:
$scope.filterValue = function(ngModelName, type, $event){
//works for simple names
//$scope[ngModelName];
//works for complex names, '$ctrl.x', 'x[$index]', etc.
console.log($scope.$eval(ngModelName));
};
For more information, see AngularJS $rootScope.Scope API Reference - $eval
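To see why the answer reaches for $scope.$eval rather than plain bracket lookup, here is a framework-free sketch (plain JavaScript, not AngularJS; `scope` is just an object standing in for $scope, and `evalPath` is a toy evaluator, not the real $eval):

```javascript
var scope = { __name: 'abc', a: { b: 42 } };

// Simple names: bracket notation is enough.
console.log(scope['__name']); // 'abc'

// Complex names like 'a.b': bracket notation looks up a literal key 'a.b' and fails.
console.log(scope['a.b']); // undefined

// A tiny expression evaluator walks the path instead, which is the kind of
// lookup $eval generalizes (it also handles indices, filters, etc.).
function evalPath(obj, path) {
  return path.split('.').reduce(function (o, k) {
    return o == null ? undefined : o[k];
  }, obj);
}
console.log(evalPath(scope, 'a.b')); // 42
```

So `$scope[ngModelName]` works only when the model name is a single property, while `$scope.$eval(ngModelName)` also handles dotted or indexed expressions.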
I got the solution, in my case it would be:
$scope.filterValue = function(ngModelName, type, $event){
$scope[ngModelName]
};
Maybe your real case is different but I don't see a need of evaluating the name in your example code.
<input ng-model="__name" ng-keyup="filterValue(__name, 'stringMax100')" type="text" name="unit-income-name" class="form-control" id="unit-income-name" maxlength="100" required>
$scope.filterValue = function(model, type, $event) {
console.log(model);
};
In my case, I don't know always the name of the ng-model as they are dynamically created, so I need to dynamically pass it. But thank you!
It doesn't matter, I think. You always need to refer to your ng-model by some variable, and you pass that same variable into the function.
I actually tried it, didn't work. It's a little more complicated because of the way of how its created. Thanks a lot anyway.
|
STACK_EXCHANGE
|
Hello and welcome to PCTechBytes. We have been providing free computer repair help online since 2003. Feel free to browse the forums, but when you're ready to ask a question or comment, please be sure to register.
We pre-purchased and pre-installed the game Shadow of War on our gaming PC through Steam this week only to discover it would not launch when we clicked Play. I wanted to post our fix here in case anyone else searching the Web for clues to this needs help.
Tried uninstalling and reinstalling the game (all 100gigs worth).
Tried rolling back the nVidia graphics driver.
Tried cursing (made me feel better but did not fix the issue).
Scanned and verified the game from within Steam
Ran sfc /scannow from the command line to make sure Windows was OK.
I ultimately realized the issue was with the Visual C++ Redistributable.
I had multiple versions installed; some were very old. I removed them all and installed the Visual C++ Redistributable for Visual Studio 2015 from here, and that fixed it. https://www.microsoft.com/en-us/download/details.aspx?id=48145
If anyone else had problems and found different solutions, let us know.
I have had this laptop for almost 2 years with no problems. It charged fine, but yesterday it just stopped working! I plugged it into the charger port and it won't turn on. It charged fine before yesterday. If it helps, it's an HP Stream 14 and I use a third-party charger as well as the stock charger that comes with it. I am aware of the trick of removing the battery and turning the laptop on, but you have to unscrew the bottom of the laptop to get access to that battery. Would the fix still work?
Okay, so this happened yesterday.
I was using AVG to clean out my PC so I could play games at a higher FPS. I selected the hard drive scan to check for any errors. It restarted my PC and started the scan. It sat at 11% for literally 5 hours, so I decided to turn it off and continue my PC tune-up. I also stopped the Superfetch service (as I heard it makes for better performance). Now here's when things get bad: I was reading a tutorial on a faster gaming experience, and it suggested the tool GPU Tweak. Not knowing much about it at all, I downloaded it and ran it, and it asked to restart my computer. I did, and it never booted back up again. I turn on the power and after about 5 seconds the screen shows a slightly lighter shade of black for about 5 seconds, then goes back to black and sometimes even turns off. Even with the HDD and RAM removed, same result. I know this is probably my fault, but is there any way to fix this and keep my data?
Intel i5 is what I have.
To preface, this is going to make me sound like a total noob but bear with me here.
I have an Asus laptop running on Windows 10, and I've had it since 2014. I don't know the official name of it or anything as it was given to me by a family member with little to no information.
It's been fine up until today. It charged through the night, I unplugged it when I left for school, etc etc. I noticed out of the corner of my eye that the battery light was flashing orange and green (it's happened before but I brushed it off because a turn on-turn off worked to fix it). I turned it on, and a black screen came up with “Scanning and repairing drive (C:)”, that eventually finished and it started up like normal. The battery light still was flashing orange and green. I was just kinda like “Eh whatever I have a paper to write I'll deal with it later."
Later came and I noticed that the actual battery icon (on the taskbar) wasn't depleting. It stayed at 96% for about 3 or so hours. I went and looked in the settings and it was still the same. So I turned it off (through the start menu) and now it's off completely, but the battery light is now green with it not being plugged in.
I asked a tech buddy of mine about it and he said it's a drive problem and that my laptop is ready to be taken out to pasture. I'm trying to avoid that at all costs because it's how I do pretty much all of my school things and I don't make enough money to buy a new one right now (also would not want to lose all my files).
I don't know if any of this made sense; but I hope so. I apologise for any errors as I'm writing this on my phone. Any help is appreciated though!
|
OPCFW_CODE
|
How to add changes for NO ENCRYPT DB2 option to db2RestoreStruct
I am trying to restore an encrypted DB to a non-encrypted DB. I made changes by setting piDbEncOpts to SQL_ENCRYPT_DB_NO, but the restore still fails. Is there DB2 sample code where I can check how to set the "NO ENCRYPT" option? I am adding the relevant snippet below.
db2RestoreStruct->piDbEncOpts->encryptDb = SQL_ENCRYPT_DB_NO
Please provide more details, e.g., which version and platform, how the database was encrypted, how you run the command, etc.
take a look: https://www.ibm.com/support/pages/how-can-i-decrypt-encrypted-db2-database You need to add the "NO ENCRYPT" option to your restore command.
@data_henrik, I am running a command through C code that takes a backup of the encrypted database. I can create an encrypted database through "db2 create db mydb encrypt". I can also restore through the command line, but whenever I do the restore through DB2's restore API, it fails. I am using Linux RHEL 7 and DB2 version 11.5. I am looking for an option from the DB2 API to restore the encrypted database to a non-encrypted one.
@mshabou, Thanks! I can do it through the command line but I am looking for option from db2 API to restore the encrypted database to non-encrypted where I might need to use restore API https://www.ibm.com/docs/en/db2/11.5?topic=apis-db2restore-restore-database-table-space .
@CPPDevelop please edit your question to show your 'C' code in particular how you initialize the sqleDbEncryptionOptions structure as pointed to from the db2RestoreStruct structure. Learn now to properly ask a question by giving all relevant facts. Otherwise your question will get closed for lack of detail.
@CPPDevelop , it would be helpful if you would properly ( i.e. fully ) show your code, as your single line update is ambiguous. If both structs are on the stack, try restoreStruct.piDbEncOpts = & sqleDbEncryptionOptions; restoreStruct.piDbEncOpts->encryptDb = SQL_ENCRYPT_DB_NO ;
Yes, I can not add more code. My only doubt is that by setting options like piDbEncOpts->encryptDb = SQL_ENCRYPT_DB_NO should work for restore of encrypted database to non encrypted database through restore API or is there any way of adding C code except setting of SQL_ENCRYPT_DB_NO?
The 'C' API named db2Restore will restore an encrypted-image to a unencrypted database , when used correctly.
You can use a modified version of IBM's samples files: dbrestore.sqc and related files, to see how to do it.
Depending on your 'C' compiler version and settings you might get a lot of warnings from IBM's code, because IBM does not appear to maintain the code of their samples as the years pass. However, you do not need to run IBM's sample code, you can study it to understand how to fix your own C code.
If installed, the samples component must match your Db2-server version+fixpack , and you must use the C include files that come with your Db2-server version+fixpack to get the relevant definitions.
The modifications to IBM's samples code include:
When using the db2Restore API ensure its first argument has a value that is compatible with your server Db2-version-and-fixpack to access the required functionality. If you specify the wrong version number for the first argument, for example a version of Db2 that did not support this functionality, then the API will fail. For example, on my Db2-LUW v<IP_ADDRESS>, I used the predefined db2Version1113 , like this:
db2Restore(db2Version1113, &restoreStruct, &sqlca);
When setting the restore iOptions field: enable the flag DB2RESTORE_NOENCRYPT, for example, in IBM's example, include the additional flag: restoreStruct.iOptions = DB2RESTORE_OFFLINE | DB2RESTORE_DB | DB2RESTORE_NODATALINK | DB2RESTORE_NOROLLFWD | DB2RESTORE_NOENCRYPT;
Ensure the restoredDbAlias differs from the encrypted-backup alias name.
I tested with Db2 v<IP_ADDRESS> (db2Version1113 in the API) with gcc 9.3.
I also tested with Db2 v11.5 (db2Version11500 in the API) with gcc 9.3.
|
STACK_EXCHANGE
|
Http Connection Error
These status codes are applicable to any request method. Unless otherwise stated, each status code is part of the HTTP/1.1 standard (RFC 7231); the Internet Assigned Numbers Authority (IANA) maintains the official registry of HTTP status codes, and servers such as Microsoft IIS sometimes extend it.
- 100 Continue: lets an HTTP/1.1 client send a small, specially configured message asking the server to reply with a 100 code, then wait for the response before sending the request body.
- 204 No Content: the response MUST NOT include a message-body, and thus is always terminated by the first empty line after the header fields.
- 205 Reset Content: the server has fulfilled the request and the user agent should reset the document view; this reset will erase your input buffers.
- 206 Partial Content: the response MUST include either a Content-Range header field (section 14.16) indicating the range included with the response, or a multipart/byteranges Content-Type including Content-Range fields for each part.
- 302 Found: the URI of the requested resource has been changed temporarily; a new URI may be given in the response.
- 303 See Other: directs the client to get the requested resource at another URI.
- 307 Temporary Redirect: the request should be repeated with another URI using the same method; future requests should still use the original URI.
- 401 Unauthorized: the response MUST include a WWW-Authenticate header field (section 14.47) containing a challenge applicable to the requested resource.
- 410 Gone: primarily intended to assist the task of web maintenance by notifying the recipient that the resource is intentionally unavailable and that the server owners desire that remote links be removed.
- 426 Upgrade Required.
- 440 Login Timeout (IIS extension): the client's session has expired and it must log in again.
- 500 Internal Server Error: a generic error message, given when an unexpected condition was encountered and no more specific message is suitable.
- 502 Bad Gateway: the server, while acting as a gateway or proxy, received an invalid response from the upstream server.
- 504 Gateway Timeout: the server was acting as a gateway or proxy and did not receive a timely response from the upstream server.
- 505 HTTP Version Not Supported.

Connection handling is more subtle than many developers first realize, and little has been written on the subject. Any HTTP client, server, or proxy can close a TCP transport connection "at will". If you stop reading a response early, you may eventually get a "connection reset by peer" error, and the buffered, unread response data will be lost even though much of it was successfully delivered.

Notes from the Google Cloud Messaging (GCM) HTTP server reference for downstream messages (targets, options, and payload are JSON): the registration_ids field is a string array specifying the list of devices (registration tokens, or IDs) receiving a multicast message; time_to_live is an optional number (see Table 1 for details); delay_while_idle is deprecated, effective Nov 15th 2016. When a message is sent with high priority, it is sent immediately, and the app can wake a sleeping device and open a network connection to your server. The downstream message error response codes (Table 7) include: 200 + error:NotRegistered, where an existing registration token may cease to be valid in a number of scenarios, including when the client app unregisters with GCM; 200 + error:DeviceMessageRateExceeded, when the rate of messages to a particular device is too high; 400, when the request could not be parsed as JSON or contained invalid fields (for instance, passing a string where a number was expected); and 500 or 200 + error:InternalServerError, when the server encountered an error while trying to process the request. GCM users are strongly recommended to upgrade to FCM, in order to benefit from new FCM features today and in the future.
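HTTP status codes group into five classes by their first digit; a small framework-free sketch (an illustration, not part of any cited specification):

```javascript
// Classify an HTTP status code by its class (first digit).
function statusClass(code) {
  if (code >= 100 && code < 200) return 'informational';
  if (code >= 200 && code < 300) return 'success';
  if (code >= 300 && code < 400) return 'redirection';
  if (code >= 400 && code < 500) return 'client error';
  if (code >= 500 && code < 600) return 'server error';
  return 'unknown';
}

console.log(statusClass(204)); // 'success'
console.log(statusClass(307)); // 'redirection'
console.log(statusClass(504)); // 'server error'
```

This is why codes such as 502, 504, and 505 all indicate server-side problems, while 401, 410, and 426 signal issues with the client's request.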
|
OPCFW_CODE
|
Automatic bill payment - direct deposit or automatic checks through mail?
I have a personal account with BOA and I'm curious how I should set up my automatic payments (to utilities, landlord and friends). Should I
A) go directly to the companies (that have to pay) and tell them to draft from my account
B) tell BOA to send them a check every month via Bill Pay
C) tell BOA to direct deposit to their account (not always possible)
Are there tradeoffs between these or does it not matter?
A) go directly to the companies (that have to pay) and tell them to draft from my account
Make sure you know how to stop this feature, so that they don't keep pulling from your bank account. Some people use an automatic billing to a bank account or credit card to pay their gym membership, then forget to turn it off even after they stop going.
Best for bills that don't change very often, that way there are no surprises.
As long as you have money in the account, it will go through.
B) tell BOA to send them a check every month via Bill Pay
Bills that can be paid electronically work the best, because the money gets to the company in a day or two, instead of a week.
They can handle almost any company or individual by sending a check by mail, but it takes longer.
The money comes out when the bank sends the check, not when the company cashes the check.
You have to get proof of payment on the company website, or when the company sends a receipt; there will be no scanned check on the banking website.
C) tell BOA to direct deposit to their account (not always possible)
I have used this only for mortgage payments to the same bank.
There is 4th option. If the company can handle an automatic draft from your bank, they might be able to automatically charge a credit card. That means only one payment from the bank each month to pay the credit card bill. This takes dedication to not skip a payment and owe a lot of interest. You can also earn cash back, or miles for these transactions.
A utility bill can sometimes be put on a level payment plan. They estimate your bills for the year based on past usage. They charge the same amount for 6 or 12 months, then they adjust to reflect actual usage and repeat the cycle.
Re: your first point to B), do you mean to say that there are two ways that B) can be done: electronic check (a day or two) vs. physical check (a week)? If so, what is the difference between C) and electronic check?
At my credit union, if I want to use bill payment I provide the company name, address, phone number, and my account number or invoice number. If it is a large company the payment is sent electronically; if it is a small company or a person it is sent via mail. I don't decide which way the money is sent; it happens behind the scenes. I also don't know the company's bank name or bank account number. Under option C I would have to know the bank routing number and account number.
What @mhoran_psprep says about the bank/FCU controlling whether the bill pay is electronic or paper check comports with my experience. You can typically tell the difference by the gap between the "send on" and "deliver by" dates on the bank's bill payment page for the payee. For my credit union, electronic is 3 business days; for paper it's 5.
Be aware of the following issues amongst these:
I'm presuming this is sending the companies a voided cheque that allows them to make a pre-authorized debit on your account. The key point here is that if the amount should fluctuate like some utilities may, then the amount coming out will vary and you have to maintain enough of a balance so that you don't bounce the payment. If you ever run your account to a low balance, this could be dangerous if you mis-account for things.
This gives you some control in that you pay when you want and thus can ensure the funds are in the account. I'm presuming this is the same as doing a bill pay through PC banking which is a bit different from physical checks. This would be my preference as I like knowing things will work though I do a pre-authorized debit for a few things like monthly bank fees and insurance.
I'm not sure I can easily imagine this construct enough to see where it goes.
Some bills like rent may be best done through post-dated checks since the amount isn't likely to fluctuate and thus can make sense. However, most banks will charge for checks and thus you may have to order checks at a point.
|
STACK_EXCHANGE
|
Header icon_links show in separate rows
This was great for a while -- but it no longer works (not sure when it broke; the latest 1.1.3 has this issue in both Sphinx 7 and 8). It seems to have to do with the .article-header-buttons class showing on separate rows.
^ The order gets jumbled if I add even 1 icon_links within html_theme_options
"icon_links": [
{
"name": "API Docs",
"url": (
"https://foo.com"
),
"icon": "fa-solid fa-book-open",
"attributes": {"target": "_self"},
},
{
"name": "Discord",
"url": "https://discord.gg/foo",
"icon": "fa-brands fa-discord",
"attributes": {"target": "_blank"},
},
],
Originally posted by @dylanh724 in https://github.com/executablebooks/sphinx-book-theme/issues/833#issuecomment-2497911154
Yep, that was it!
I added this to my arbitrary .css and it's back!
Please keep this issue open, as I'm pretty curious how this could've happened. I even checked out an old tag -> built -> and it had these issues using the same version. It's as if an update was force-pushed in.
Additionally, I'm having other random CSS issues such as font-weight and a couple others that I suspect is due to the same source cause, but I can't explain it.
Another noticeable change is this one with the 3rd column: the vertical space is larger.
If I inspect the element and snag a value, I can find it here at _toc_inpage.scss
Adding
h3 {
font-size: 125%;
font-weight: normal;
}
worked around the other issue.
Do you mind writing here exactly what you added to your CSS styles pls?
For the icons:
https://github.com/executablebooks/sphinx-book-theme/issues/879#issuecomment-2497991793
For the font weight:
https://github.com/executablebooks/sphinx-book-theme/issues/879#issuecomment-2498048554
I didn't get to the vertically stretched 3rd column text
Worked around the 3rd column vertical stretching, too. Here's everything combined:
/*
SPHINX-BOOK-THEME HOTFIX (top-right icons):
https://github.com/executablebooks/sphinx-book-theme/issues/879
*/
.header-article__inner .header-article-items__end, .header-article__inner .header-article-items__start {
align-items: start;
display: flex;
gap: .5rem;
}
/*
SPHINX-BOOK-THEME HOTFIX (right column vertical stretching fix):
https://github.com/executablebooks/sphinx-book-theme/issues/879
*/
.toc-entry a.nav-link {
padding: .125rem 0 .125rem 1rem;
}
/*
h3 font weight fix (from extra heavy to normal):
https://github.com/executablebooks/sphinx-book-theme/issues/879
*/
h3 {
font-size: 125%;
font-weight: normal;
}
I'm still mind-boggled as to how this happened when PyPI says the tagged version hasn't been updated since ~June?
|
GITHUB_ARCHIVE
|
This service helps you sharpen your writing skills as you prepare for the Analytical Writing measure of the GRE General Test. Using topics and tasks created by ETS, the maker of the test, this web-based service lets you write and submit responses to two essay topics and get immediate scores and feedback.
Online exams often put students under pressure and stress. You can compare the various services available with us to get rid of that stress:
The latest official GRE test preparation offering, POWERPREP PLUS® Online, gives you the experience of taking the real, computer-delivered GRE General Test and more! Two timed practice tests (available separately), each featuring a different set of questions, help you prepare at home, in school, or wherever you have internet access.
The teacher or course builder creates an account with an exam builder. In such an exam system you can create questions and add them to the exam.
And even though you can Google specific subjects like Accounting, topics such as journal entries, cash flow diagrams, and income statements are not problems that even Google can save you from solving. That is not a time bomb you would ever plan to get involved with, is it? No. Here at Assignment Research, we promise to improve your student experience by providing exam help online. Before the daunting questions of a tough exam break your spirit, seize the opportunity.
After analysing every detail of the exam provided by the student, our specialists begin to design a training programme. Our Online Exam Help is a one-of-a-kind service that helps students find better ways of scoring good marks and become accustomed to this system.
Anyone who needs an exam to be taken by a group of students. Our clients range from schools and teachers to businesses.
Academic Assignments is a provider of online exam assistance. We exist to serve the education and testing markets, and Academic Assignments is proud to support you in your upcoming exam so you can crack it easily and get good grades. Academic Assignments provides examination help for the following types of exams.
Just contact Academic Assignments for assistance; our representatives will advise you in choosing the right service. If you have any questions, please feel free to contact Academic Assignments. You can also chat now with the help of our live chat option; our representative will be there for you. Course Subjects Help
Online exams are a most important part of online courses. Students always look out for the best online exam help so that they can achieve good grades. Exams act as a burden for students as well as a benchmark for better education. This is an indispensable factor for college and university students in assessing their basic skills. Students always need expert guidance to get online exam help.
Sometimes students do not get enough time to study for their online examination, so they look for online test help so that they can perform well in it and obtain good grades in their academic careers.
With an online exam, students can take the exam online, in their own time and on their own device, no matter where they live. You only need a browser and an internet connection.
"I trusted this site and got good grades for my trust. They have some of the best online exam helpers. Thanks for providing awesome online exam help."
If you need any help related to online tests or online quizzes for any of the subjects, please submit your requirements here. You can read more about our Assignment Writing Help services here.
|
OPCFW_CODE
|
'use strict';
var natural = require('./natural');
var character = require('./character');
var pick = require('./lib/pickOne');
var bool = require('./bool');
var postcodeAreas = require('./lib/postcodeAreas');
/**
* Return a random UK Postal code value.
*
* @returns {string} - A random UK Postal code value
*
* @example
* postcode(); => 'W6 9PF'
*/
module.exports = function () {
// Area
var area = pick(postcodeAreas).code;
// District
var district = natural({ max: 9 });
// Sub-District
var subDistrict = bool() ? character({ alpha: true, casing: 'upper' }) : '';
// Outward Code
var outward = area + district + subDistrict;
// Sector
var sector = natural({ max: 9 });
// Unit
var unit = character({ alpha: true, casing: 'upper' }) + character({ alpha: true, casing: 'upper' });
// Inward Code
var inward = sector + unit;
return outward + ' ' + inward;
};
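As a sanity check, the shapes this generator emits can be matched with a regular expression. A quick sketch (the regex mirrors the generator's output format above, not the full UK postcode specification):

```javascript
// Matches: 1-2 letter area, 1 digit district, optional sub-district letter,
// a space, 1 digit sector, and a 2 letter unit, e.g. 'W6 9PF' or 'SW1A 1AA'.
var pattern = /^[A-Z]{1,2}[0-9][A-Z]?\s[0-9][A-Z]{2}$/;

function looksLikeGeneratedPostcode(code) {
  return pattern.test(code);
}

console.log(looksLikeGeneratedPostcode('W6 9PF'));   // true
console.log(looksLikeGeneratedPostcode('SW1A 1AA')); // true
console.log(looksLikeGeneratedPostcode('w6 9pf'));   // false (lowercase)
```

Real UK postcodes have additional constraints (valid area codes, multi-digit districts such as 'M60'), so this check is only suitable for validating output from the generator above.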
|
STACK_EDU
|
2009-08-17, 09:00 PM (ISO 8601)
Firbolg in the Playground
[3e] Taking Creative Build Requests
I'm bored, and in the mood to fiddle with the rules, so I'm taking build requests. These can be things you actually want to play, ideas that have been bouncing around your head that you don't know how to make work, or just challenges to test my ingenuity. I don't mind if other people give their own answers to challenges before or after I do.
What you give me
I'll need a brief description of the type of role you want the character to fill, any special requirements, and the general level range you want the character for. Do not mention the name of any class, or any particular known trick, just the idea you're going for. I'll accept PrC requests, but will interpret them liberally - you'll get something that fills the same archetype, but might handle it differently.
What I give you
A way to creatively fill that role that (in my opinion) is powerful, unique, and playable in a normal game. I won't give you anything I wouldn't allow as DM, but I may make minor houserules (if so, I'll state them and the justification). My emphasis will be on creativity, rather than literalism. What you get will not be the definitive example of its class, it'll be something that competes in the same category with (I hope) more style. I won't be making full character sheets, but I'll give you a firm basis that works.
You: "Hey Mr Zeal Person, I want an archer who can deal huge damage and peg off enemy leaders!"
Me: "Why certainly! Let's just grab you a level 7 Wilder. What's that you say, you were expecting a Scout or Ranger? Well, the Wilder gets Crystal Shard, and let's boost it up with Empower Power, and Metapower for that extra little edge. Hey presto, your Crystal Shard does, effectively, 10.5d6 as a ranged touch attack! Even better, let's surge to bring that up to, effectively, 15d6! And it's a ranged touch attack, so good luck anyone trying to dodge it! And then you can start taking archery feats like Psionic Shot! Hey presto, your archer does what an archer should without all the mucking about with silly things like arrows or bolts!"
You: "....except hit something more than 40 feet away."
Me: "Well, there's that. But you can't have everything, y'know? And you've still got a bunch of feats and psi powers to play around with."
Concept: Archer, single target DPS
Race/Class: Elan Wilder 7
Necessary components: Crystal Shard power, Empower Power feat, Metapower feat.
Result: Heavy single-target damage, does it as a ranged touch attack for reliability, scales very well with level (39d6 at lvl 20, without any further effort at all).
Last edited by sonofzeal; 2009-08-17 at 09:18 PM.
Detect and Reflect a Generic List
We have a project that uses reflection quite a bit. If you use any kind of mapper, chances are you too are using reflection. One gotcha with reflection is the names of the properties: unless you build some other infrastructure to support mapping one name to another, chances are you won't get the data you desire.
What is This About
This post is not about creating your own mapper, although it could help. This post is more about a specific problem. I recently had to reflect out data for a REST API. We did not want to reflect out the entire object; it would be too much data and too large a response. Plus, we want the REST API to be as fast as possible. To solve this we decided that part of the call would be to specify the desired fields, or return some very basic default fields. This was fairly straightforward until it came to Generic Lists (List<T>) or ICollection objects. We did not want to reflect out those objects in their entirety either.
As usual I thought this might have been done before, so I set about searching. StackOverflow is your friend, but it is your friend that knows almost too much. You will find a lot of questions and a lot of answers and while most were partially right I quickly noticed a flaw.
One such answer was to simply look for things being IEnumerable. That will work, but guess what else is IEnumerable: a String. Who wants to reflect out an IEnumerable of type Char?
I created two methods to check for IsGenericList and IsICollection. For the List I used return type.IsGenericType && type.GetGenericTypeDefinition() == typeof(List<>); and for the ICollection I used result = (type.Namespace == "System.Collections.Generic");
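As an aside, the same string-is-iterable gotcha exists in other languages. Here is a minimal Python analogue of the check, not the post's C# code; the function name is my own invention for illustration:

```python
from collections.abc import Iterable


def is_reflectable_collection(value):
    """True for collections worth reflecting into, excluding strings.

    Mirrors the C# gotcha above: str (like String) is itself iterable,
    so a bare Iterable check would wrongly treat it as a collection.
    """
    return isinstance(value, Iterable) and not isinstance(value, (str, bytes))
```

The explicit string exclusion is the important part; relying on the interface check alone is exactly the flaw described above.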
This is where I again fell back to the ExpandoObject and IDictionary<string, object>. These allow you to easily create an object and add properties dynamically at run time.
An Example Project
I realize this is a lot of reading and not much code. So I created a console application to show all of this stuff. The link will take you to a GitHub repository.
The example is very simple, I just wanted something that covered all of this stuff and worked. I have three data objects in my Models folder; Person, Purchase, and Review. The Person object contains a List of both Purchase and Review objects. I created a single person object with the appropriate data to reflect out.
Since it is a console application it reflects out into one long string, but you will get the idea. Most of the work takes place in the DataMapper.cs. You will notice I check for the PropertyType to be null, this could very well happen if someone did not send the proper name over that matches something on your object.
You will also notice a comment in there about working with IDictionary. You cannot have duplicate keys, so you may have to get creative with those keys.
While none of my objects in this example were ICollection, it will basically work like List<T>. There are methods in there to handle it, you just have to do some extra work in the ReflectICollection method. Mainly you have to flesh out the switch statement.
I hope this helped you with reflecting a List<T> or ICollection. It was a fun challenge that I really enjoyed. Thank you very much for taking the time to read my post.
If you're losing hair trying to figure out how to promote your content, this is the video for you! In this content marketing tutorial, Eric Siu shares his tried and true methods for promoting that will be sure to bring your traffic to a whole new level. Learn how to get more views to existing content and work with your new content to maximize initial launches.
►Subscribe to my Channel: http://youtube.com/subscription_center?add_user=gogrowtheverywhere
Want to learn the SEO tactics that AirBnB, Lyft, and Heineken use to drive millions of site visits a month? Download the case study now: https://www.singlegrain.com/res/digital-marketing-agency/case-studies/
Leave some feedback:
• What should I talk about next? Please let me know on Twitter - https://twitter.com/ericosiu or in the comments below.
• Enjoyed this episode? Let me know your thoughts in the comments, and please be sure to subscribe.
Connect with Eric Siu:
• Growth Everywhere Podcast - http://www.growtheverywhere.com/
• Marketing School Podcast - https://www.singlegrain.com/marketing...
• Single Grain - Digital Marketing Agency - https://www.singlegrain.com/
• Twitter https://twitter.com/ericosiu
Full Transcript of The Video
Here we go. The first thing is, make sure that you're creating content. You're consuming or you're watching this content right now, maybe you're listening to my podcast on top of that, which I hope you do. The question is, how do you go about promoting it? Because, a lot of people think they build it, and then they will come. It's not how it works. So I'm going to give you some of the table stake stuff first.
Make sure that you're hitting your social media channels. Whether you're using Buffer, Hootsuite, and then using a tool like MeetEdgar to consistently publish the same thing over time. Assuming the content you're publishing is evergreen. Make sure you have some kind of cadence in place to do that, to take care of it. You can use Facebook Ads, and you can just boost your content. It's going to cost you. Maybe you could spend $5 a day, you could spend $1 a day just boosting your content, and you can see how far that's going to take your business.
With Facebook you can target the right people. So you don't necessarily need to be a Facebook Ads expert, but have this framework built in, this process built in that your team can follow so you're getting your content out there, because Facebook is very much a pay to play platform. Today organic reach continues to decrease.
Look for people that have shared similar content. You can use a tool like BuzzSumo. You can just type in content marketing, or you can type in whatever your keyword is. You can find people that have shared similar content. That's going to help you. You can reach out to people that have tweeted that, you can reach out to people that have linked to that, ask them to share it, ask them to perhaps do some kind of collaboration with them as well.
That's the thing. A lot of people also forget about collaboration too. With this YouTube channel, we're growing. We started to do collaborations with people that have similar channel size. Maybe they're two, three, four X bigger, but they're willing to do collaborations, because I might have a relationship with them, or maybe they have something that you want. So maybe you guys have similar sized audiences. You guys can trade, and they'll siphon off a piece of your audience, and then you'll siphon off a piece of their audience. There's a lot of different things you can do from that perspective.
I think it's also really important, the stuff that I'm giving you right now, maybe you might have some ideas of your own, but put them into a checklist that makes everything easy to follow. Make sure that you're repurposing the content as well. So if you Google the content reusage workflow there's literally a framework you can follow. If a content piece is out of date, you could update it. If there is something you can add to it, you can expand it. You can repurpose a video like this into a blog post. You can repurpose it into a podcast. You can repurpose this into maybe a longer webinar, with a bunch of tips combined as well.
So those are just a couple things you can do. I'm just giving you a couple power ups. I'm curious to know what you're going to actually do with these power ups to actually level up your business. If you enjoyed this video just go ahead and hit subscribe, and we'll see you tomorrow.
You can have the XMPP service running on your own domain. The only condition right now is that it must be a DNS-registered domain, and the DNS must point to the following address: tigase.me. Please note: do not confuse it with the tigase.im domain name.
We recommend using SRV records, as the XMPP specifications require them, but since some DNS services do not allow SRV records yet, we do not require them either. If you want to register your-domain.tld on our XMPP service, make sure that either the command:
$ host your-domain.tld
your-domain.tld has address 22.214.171.124
displays that address, or that the commands:
$ host -t SRV _xmpp-server._tcp.your-domain.tld
_xmpp-server._tcp.your-domain.tld has SRV record 10 0 5269 tigase.me.
$ host -t SRV _xmpp-client._tcp.your-domain.tld
_xmpp-client._tcp.your-domain.tld has SRV record 10 0 5222 tigase.me.
display the tigase.me DNS name. We strongly recommend not to use the IP address directly, however: if the service grows, it will be much easier for us to migrate and expand it using the DNS name rather than the IP address.
If you want MUC and PubSub available under your domain, you have to set up DNS for your muc.your-domain.tld and pubsub.your-domain.tld domains too. For MUC:
$ host -t SRV _xmpp-server._tcp.muc.your-domain.tld
_xmpp-server._tcp.muc.your-domain.tld has SRV record 10 0 5269 tigase.me.
$ host -t SRV _xmpp-client._tcp.muc.your-domain.tld
_xmpp-client._tcp.muc.your-domain.tld has SRV record 10 0 5222 tigase.me.
For PubSub:
$ host -t SRV _xmpp-server._tcp.pubsub.your-domain.tld
_xmpp-server._tcp.pubsub.your-domain.tld has SRV record 10 0 5269 tigase.me.
$ host -t SRV _xmpp-client._tcp.pubsub.your-domain.tld
_xmpp-client._tcp.pubsub.your-domain.tld has SRV record 10 0 5222 tigase.me.
Now, how do you register your domain with our service?
There are a few ways. We recommend checking the Add and Manage Domains section of the documentation on setting that up. If you cannot or don't want to do it on your own in the way described in the guide, please send us a message requesting the new domain, either via XMPP to email@example.com or via the contact form. User registration is available via the in-band registration protocol. You can also specify whether you want anonymous authentication to be available for your domain, and you can specify a maximum number of users for your domain.
Any comments or suggestions are very welcome.
Why should I include AFM files in a font package for CTAN?
On CTAN, AFM files are often provided along with type1 (PFB) and TFM files. I wonder if I should include AFM files in a font package for CTAN at all. I assume that the AFM files are included just in case you use the font with some other programs outside the TeX-world. Is this assumption correct or are there cases where AFM are needed in programs that are provided with a common TeX distribution (e.g. dvips)?
There are so many programs in the TeX ecosystem that I wouldn't dare to claim that none uses AFM files. IMHO, at some point in history ConTeXt did read them, and luaotfload contains some references, but I don't know if that is still the case. If you have them, it can't harm to provide them (and they can help to check details of the font).
@UlrikeFischer The problem is that I have 260*3 MB AFM files, since FontForge cannot truncate an AFM to the first 256 glyphs. The TFM files are 10 times smaller. I would have to write a special script to optimize the AFM files. But if I don't hear from anybody that the AFM files are necessary, I will tend not to include them. I have seen that the stix2-type1 package does not include AFM files either.
@rallg It is impossible to use OpenType with the dvips driver, and pdfTeX cannot read font metrics from OTF or TTF. It could be that the OP converted the design to Type1 so older TeX drivers can also use it.
TFM files are able to encode only 256 characters per font. Type1 fonts and their AFM format can include an unlimited number of characters; standardized character names are used in the format.
LuaTeX with luaotfload is able to load Type1 fonts directly without TFM; it reads AFM+PFB files. The standardized names of characters are mapped to Unicode in that case, so the user can access all characters of the font, not limited to 256 characters as with TFM.
If you are able to generate OTF format then it is a much better choice. LuaTeX with luaotfload is able to load these fonts and all characters are accessible too. Everything is simpler and there can be more features ready to use in the font.
When OTF format is present in a font package then the Type1 format (AFM+PFB) seems to be obsolete and there is no reason to use it. Maybe you want to provide it for users of obsolete TeX engines (like pdfTeX) where only TFM files are possible when loading the font. And users of such obsolete engines don't need the AFM format.
Note that the AFM format includes data on kerning pairs. If it is generated automatically, then almost every character has a kerning pair with every other character, many of them unnecessary. If you remove the unnecessary kerning pairs from the AFM, you get a "normal" size for these files.
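Stripping the unnecessary kerning pairs is mostly a matter of filtering the KPX lines in the AFM's kern section. A rough Python sketch of the idea (my own, not from the thread; a complete tool would also rewrite the pair count on the StartKernPairs line, which this deliberately skips):

```python
def strip_small_kerns(afm_text, threshold=5):
    """Drop KPX kerning pairs whose absolute amount is below threshold."""
    kept = []
    for line in afm_text.splitlines():
        if line.startswith("KPX"):
            # KPX format: "KPX <glyph1> <glyph2> <amount>"
            amount = float(line.split()[3])
            if abs(amount) < threshold:
                continue  # negligible pair, drop it
        kept.append(line)
    return "\n".join(kept)
```

With automatically generated AFMs, most pairs fall under such a threshold, which is why the files shrink to a "normal" size after filtering.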
Indeed, I will also provide OTFs, so LuaTeX would not need the AFMs.
For the orthodox TeX production chain, neither DVIPS nor DVIPDF(M)(X) nor pdfTeX needs AFM to work, and any pdfTeX-compatible processor also works without AFM.
If you are distributing Type1 fonts and the user wants to edit the font in a font editor that doesn't know TeX's TFM format, they can import the AFM into the editor so the various font metric information can also be updated, and afterwards a TFM can be generated from the AFM.
luaotfload can load PFB with AFM in case there is no TFM provided.
If you can provide the AFM from a separate download location, or the AFM is actually generated straight from the TFM, the AFM might not need to be distributed with the font uploaded to CTAN.
Another popular typesetting tool, UNIX troff, needs the AFM to generate the font information required to use a Type1 font.
A few tests on FF x64: a clean fresh startup takes ~350MB. With the curvefever.io webpage loaded (pre-game, the page where you can start or download), it is about ~395MB. After game launch to the lobby, it jumps to about 880MB, then goes back to about 790MB.
In game, it stays on about 855MB.
I tried more games in a row, but never got over 890MB
Just to add, my configuration is: FF x64 50.1.0 with Adblock, Ghostery, Greasemonkey (disabled by default), Firebug (disabled by default) and 4 more minor addons, which are also disabled by default and activate only when needed (but probably take some RAM on browser start).
So … 890MB, this is the peak I reached at fresh startup … If I use my FF normally, I have like 1400 to 1900 MB, and if I add CF, it can be like 2300 … it is quite a lot, but it shouldn't cause any failure on modern computers with at least 8GB RAM and an x64 system installed.
//EDIT missed your UBUNTU sentence … so read the steps, but then continue in the next post …
Now try these steps:
1) Open Task Manager (Ctrl+Alt+Delete -> Task Manager), where you can read the RAM consumed by each process.
2) Open the browser.
3) Arrange them half/half on screen so you can watch both the game and Task Manager.
4) Write down the RAM usage of FF at the homepage.
5) Write down the RAM usage on the curvefever.io splash screen page.
6) Run the game, log in to the lobby, and write down the RAM usage.
7) Play the game and write down the peak RAM usage.
8) Write down the RAM usage when the crash happens.
Then, if everything is OK:
Use your browser as usual. After an hour or so of using it, when its memory is one big mess, repeat steps 4-8 with all your other tabs open; just add one tab with the game (on step 4, write down the RAM usage before opening the new tab for the game).
Write down the results.
Other important info:
How much RAM do you have installed?
Which system do you use? (not only Win XP/7/8/10, but also the version, x64 or x86 - though I don't think anyone would install x86 without a very good reason)
You can find this in Computer/Properties (right-click the Computer icon, or blank white space in File Explorer where the C: and D: disks are).
How much SWAP do you use?
Now, I will not be 100% accurate, because I use a localized version of the OS:
Again, Computer/Properties; on the left side in the menu links there is something like "Advanced settings" … it should be the last link.
In the newly shown window, hit the middle tab, where the Performance, User Profiles and Startup and Recovery options are. Hit the button near the performance block. Again in the new window, hit the middle tab, and there, at the bottom, is a number which shows the size of the SWAP file that Windows can allocate. Write it here too…
Then, we shall see
Batches freeze after creating +- 15 batches of 450 operations each
I have about 50 000+ items that I need to set/update in our Firestore database. I am creating batches of items in order to "update" and commit them to Firestore. I am using the "set" method with merge option to add and update items into the batch e.g.
$batch->set($document, $datum, ['merge' => true]);
Each batch contains 450 set operations. As I am creating these batches, before I have even started to commit them, it stalls/freezes after creating about +-15 batches containing 450 set operations each.
The code works fine, because using the same code with smaller batches, e.g. 5 batches of 450 set operations each, works perfectly. As soon as I try to create more than 10 batches, it starts failing, with no error. I would like to create about 800~1000 batches. I have also added try/catch blocks and logging attempts, trying to catch any error, but no error is shown in the logs; it just freezes.
This is my code:
$batches = array_chunk($items, 450);
$batchArr = [];
// create batches and add set operations (it freezes around loop number 15 below)
foreach ($batches as $idx => $data) {
$batch = $db->batch();
foreach ($data as $datum) {
$document = $db->collection('accounts')
->document($accountId)
->collection('items')
->document($datum['itemId']);
$batch->set($document, $datum, ['merge' => true]);
}
array_push($batchArr, $batch);
}
// commit batches
foreach ($batchArr as $batch) {
if (!$batch->isEmpty()) {
$batch->commit();
}
}
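For reference, the chunk-then-commit pattern above can be sketched language-neutrally. Committing each chunk as soon as it is built, instead of holding every batch in batchArr until the end, keeps peak memory roughly constant. This is a Python sketch, not the PHP SDK; `commit` is a hypothetical stand-in for the Firestore call:

```python
def chunked(items, size=450):
    """Yield successive fixed-size slices of items (array_chunk analogue)."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def process(items, commit):
    """Commit each chunk immediately, bounding peak memory use."""
    committed = 0
    for chunk in chunked(items):
        commit(chunk)  # hypothetical Firestore commit of one batch
        committed += len(chunk)
    return committed
```

Whether this avoids the freeze described here is a separate question, but it removes the need to keep ~1000 live batch objects in memory at once.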
Environment details
OS: Ubuntu 22.04
PHP version: 8.1.8
Package name and version:
"google/cloud-firestore": "^1.24",
"kreait/laravel-firebase": "^4.1",
Steps to reproduce
Use the batch method to create batches that can be committed to Firestore.
Add 450 set operations per batch inside a for loop, as shown above.
Run into freeze situation after about 10 - 15 batches of 450 set operation each has been created (not committed yet).
No error is shown anywhere. (I have tried to use try catch method, throwables, sentry, but no errors are shown)
Code example
$batches = array_chunk($items, 450);
$batchArr = [];
// create batches (it freezes around loop number 15 below)
foreach ($batches as $idx => $data) {
$batch = $db->batch();
foreach ($data as $datum) {
$document = $db->collection('accounts')
->document($accountId)
->collection('items')
->document($datum['itemId']);
$batch->set($document, $datum, ['merge' => true]);
}
array_push($batchArr, $batch);
}
// commit batches
foreach ($batchArr as $batch) {
if (!$batch->isEmpty()) {
$batch->commit();
}
}
I have searched open and closed issues in the issue queue. I have also searched stack overflow for similar issues. The only issue that I could find that was related to this one is with the JavaScript Firebase sdk, about 3 years ago, see here https://github.com/firebase/firebase-js-sdk/issues/1437
I have tried multiple different ways of optimising this code, and even added 1 second sleep operations in between set operations, however, I still get the same result.
Also, I am using Laravel and this package "https://github.com/kreait/firebase-php" which is a simple "bridge or wrapper" to the official firebase client. I was redirected here from this thread https://github.com/kreait/firebase-php/issues/773. With Laravel, we would see immediately if it runs out of memory or resources, which, to my knowledge, it does not.
Thanks in advance for your help and guidance!
Just an update here: I also tried using the BulkWriter class, as it seems that the batch function has been deprecated.
I upgraded our packages to the following:
"google/cloud-firestore": "^1.27.3",
"kreait/laravel-firebase": "^5.1",
This is the new code:
$batches = array_chunk($inventory, 450);
$batch = $db->bulkWriter();
foreach ($batches as $idx => $data) {
foreach ($data as $datum) {
$document = $db->collection('accounts')
->document($accountId)
->collection('items')
->document($datum['itemId']);
$batch->set($document, $datum, ['merge' => true]);
}
$batch->flush(); // Flushes the enqueued writes in batches with auto-retries
}
Log::debug("Done with batches");
$batch->close();
It runs about 90 batches and then just freezes. So it was able to process more batches than creating single batches manually, however, it still failed and freezes after a while with no error.
@vishwarajanand after further investigation, I increased the memory on the server and that seems to have solved the issue.
However, it might still be something to look in to. The bulkWriter used about 500mb of memory to process all of my batches, a total of about 450 +- batches. It did not release any of the memory until the bulkwriter was closed, even though I was flushing with each batch. So you might run into serious memory issues when you need higher amounts of batches.
Each batch uses 1-2 MB of memory in my case, so let's say you need to process and flush 1000 batches; then you would need to reserve at least 2000 MB of memory. Is there another way to scale this operation?
@janpansa Thanks for reporting and narrowing down this issue to memory of the PHP process. Its very helpful.
I wrote a tiny script to get memory usage and peak memory usage, and even on 1200 batches it consumed roughly ~80 MB (and took ~5 mins). I tested the following code on Linux with PHP 8.1.12.
Code to get memory usage for BulkWriter
<?php
require_once __DIR__ . '/vendor/autoload.php';
use Google\Cloud\Firestore\FirestoreClient;
function print_mem($doc_count)
{
/* Currently used memory */
$mem_usage = memory_get_usage();
/* Peak memory usage */
$mem_peak = memory_get_peak_usage();
echo "| $doc_count |".
round($mem_usage / (1024*1024))." | ".
round($mem_peak / (1024*1024))." |". PHP_EOL;
}
function testBulkInsert($batchCount)
{
echo "| Batch Count(20 docs each) | Memory usage (in MB) | Peak memory usage (in MB)".PHP_EOL;
echo "|-------------|---------------------|--------------------|".PHP_EOL;
$client = new FirestoreClient();
$collection = $client->collection(uniqid('bulkwriter-issue-5927'));
$bulkwriter = $client->bulkWriter();
for ($i = 1 ; $i <= $batchCount; $i++){
for ($j = 1 ; $j <= 20; $j++){
$doc = $collection->newDocument();
$bulkwriter->create($doc, [
'foo' => 'bar',
]);
}
// shorten output
if ($i % 100 === 0){
print_mem($i);
}
}
$results = $bulkwriter->flush();
$x = count(
array_filter(
$results['status'],
function($v){return $v['code'] === 0;}
)
);
echo PHP_EOL;
echo "Total $x documents added successfully.".PHP_EOL;
}
$batchCount = 1200;
testBulkInsert($batchCount);
produced the following output:

| Batch Count (20 docs each) | Memory usage (in MB) | Peak memory usage (in MB) |
|---|---|---|
| 200 | 17 | 17 |
| 400 | 30 | 30 |
| 600 | 42 | 43 |
| 800 | 54 | 55 |
| 1000 | 69 | 70 |
| 1200 | 80 | 81 |
And all 24000 documents were added successfully.
Actually, by design BulkWriter needs to cache the results for all operations, so that users can know which operations failed.
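If you want to verify how much a retained result set like this costs in your own process, Python's tracemalloc offers a rough analogue of the memory_get_usage()/memory_get_peak_usage() probes in the PHP script above. This is a sketch, not Firestore code; the cached dicts merely stand in for BulkWriter's per-operation results:

```python
import tracemalloc


def peak_memory_of(build):
    """Run build() and return (result, current, peak) traced bytes."""
    tracemalloc.start()
    result = build()  # any callable that allocates
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, current, peak
```

Measuring a stand-in cache of 20,000 small dicts this way makes the linear growth in the table above easy to reproduce in miniature.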
Thanks @vishwarajanand , this helps a lot, will keep this as reference when moving forward.
Simple Social Media Stream
Simple Social Media Stream extension is the best choice for those who are looking for an easy way to share their social networking updates on their Joomla ready website.
This extension gives you a combined social media stream for all of your social network updates, and can display them in 3 different layouts: Wall, Timeline, Carousel. It supports a growing list of social networks including Facebook, Twitter, Pinterest, Instagram, YouTube, Vimeo, RSS, SoundCloud, VK and LinkedIn, and includes about 21 feed options.
Your visitors will be able to share your posts on Facebook, Twitter or Linkedin from your website. They will also have the option to reply, retweet or favorite the Twitter posts on your stream. They can even filter your social stream by social network.
You can also change the style of your stream using the theme manager and/or custom CSS stylesheets, and make it unique to your website.
- To broadcast all your social network news, photos, videos and updates from multiple social network accounts as a single stream to your visitors.
- To create a single social stream for multiple social network accounts with multiple profiles.
- To create a multi-network photo or video gallery on your website.
- To create a news stream from multiple RSS feeds on your website.
- To broadcast all social network news related to a specific search term or hashtag from multiple social media channels on your website.
3 different display modes:
- Wall, Timeline, Carousel Feed
Supports 11 (and growing) social networks:
- Facebook, Twitter, Pinterest, Flickr, Instagram, YouTube, Vimeo, RSS, SoundCloud, VK, LinkedIn
3 ways to add to your website:
- Add Social Stream on a page
- Add Social Stream as a Module
- The extension is fully responsive, enabling it to be used in mobile-friendly websites.
Slideshow Presentation View:
- Feature to display social items in a full lightbox slideshow, for presentations and wide screens.
More than 30 feed options:
- Facebook page wall public posts.
- Facebook album public photos.
- Facebook page public photos.
- Facebook page public videos.
- Twitter user latest Tweets.
- Twitter list Tweets.
- Twitter search Tweets with hashtags.
- Pinterest latest user public Pins.
- Pinterest latest public Pins from a specific board.
- Flickr user latest public photostream uploads.
- Flickr group public photostream uploads.
- Instagram user posts including photos and videos.
- Instagram search posts by tags.
- Instagram latest posts by location ID.
- Instagram search by geographical location.
- YouTube user latest uploads.
- YouTube public playlist uploads.
- YouTube search by term.
- Vimeo user public (a. videos, b. likes, c. appeared in, d. all videos, e. subscriptions, f. albums, g. channels & h. groups) feeds.
- SoundCloud user tracks public feed.
- VK user or community wall feed.
- RSS feed URL latest entries.
- LinkedIn company updates.
- Extension licensing and one year of automatic updates.
- Images lazy load feature.
- Load more items button for Facebook, Twitter, Instagram, YouTube, Vimeo, VK, Flickr, and LinkedIn.
- Option to get specific sets of posts for Facebook page feed. Posts published by this page, or by others, or by both on this page.
- Ability to get posts in a certain datetime in Facebook.
- Ability to get tweets generated in a given date in Twitter.
- Filtering the stream items using a search phrase.
- Ability to order the filter network icons.
- Option to order the stream results by date of item or randomly.
- Limiting the maximum number of results to display on the stream.
- Limiting the title, description & comments' words count to display for each item on the stream.
- Caching of social feeds, with a configurable cache time, to reduce download time.
- Read more link for long block of texts.
- Ability to open links in new window or parent window
- Option to change the status of links to follow or nofollow.
- Opening images & videos in lightbox window.
- Video icon overlay on stream video items.
- Ability to enable/disable animation and to define the rotate delay & filter direction for the Wall.
- Option to adjust the spacing between the columns in the wall.
- Option that lets blocks adjust and resize to fill gaps in the wall.
- Option to display timeline in one column or based on browser screen width.
- Share posts on Facebook, Twitter or Linkedin from your website.
- Fully documented + all examples.
- Fully extension inline descriptions.
- 5 built-in templates.
- Theme manager.
- Custom layout & CSS stylesheets.
- Allowing to select from different themes.
- Customization of stream body background color, border color, border size, background image & font color for all display modes.
- Customization of stream item background color, border color & border size for all display modes.
- Select the font size for the stream.
- Display your social stream module in different positions on your website.
- Including photo comments for Facebook.
- Displaying number of likes & comments for Instagram.
- Displaying the post type icons.
- Allowing to set the feed block width/height.
- Allowing to set the image width/height for images & thumbnails.
- Allowing to select the width/height for videos.
- Option to define, how to display the image for each item (Boxed or Expanded).
- Option to allow loading images over https.
- Processing multiple IDs per network.
- Processing multiple feeds per network.
- Ability to select the content blocks to be included in each item in stream output.
- Adding unlimited social streams on your website.
- Ability to add multiple social streams in a page on your website.
- The ability to define the number of items displaying in each slide for different screen widths on Carousel layout.
- Support for multi-byte character set languages.
- Feature to display social stream items as lightbox slideshow.
- Auto-resize responsive lightbox window.
- Online debug log.
- Ability to translate to any language.
Social APIs Restrictions
There are some restrictions from the social network APIs that impose limitations on this plugin; they are listed here.
- You cannot add a Facebook personal profile feed to the plugin. Only page, album feeds are allowed.
- Twitter account should not be set as “Protect my Tweets”.
- Twitter search API can only grab the items that are posted in the last 7 - 9 days.
- Twitter API does not provide the image data for re-tweets.
- For the Instagram User Feed, you can only get results from accounts for which you have an access token created using that account.
- Pinterest re-shared items will not include the original shared item.
- Unlike the other networks, on LinkedIn you can only get the feed from company pages or showcase pages that you have under your management, not pages that don't belong to you.
- The extension requires the PHP version 5.6.x or higher.
- The extension requires the Joomla version 2.5 or higher.
- This extension requires both PHP's multibyte string extension AND iconv extension (enabled by default on most servers).
- PHP's XML extension is required (enabled by default on most servers).
- PHP's cURL extension is recommended (required for Facebook & Twitter feeds).
- Simple Social Media Stream 1.0.7 extension package.
- Full documentation files in HTML format.
If you have any questions or suggestions concerning the Simple Social Media Stream extension, please contact us via our website at http://www.asanaplugins.com/support/
= 2.4.0 - 19.03.2019 =
- New : API proxy setup setting added.
- New : Added the sb-img class to all image tags on social stream.
- New : Feature to disable images lazy loading on the social stream.
- New : Adding meta data to carousel items.
- Deprecated : Google+ API deprecated and removed from the component.
- Fix : Twitter embedding connection refused error handled.
- Fix : Fixing the problem of displaying Instagram user info on items.
- Fix : Fixing the load more problem of Instagram user items.
- Fix : Fixing the images lazy loading problem.
- Fix : Fixed the 'call stack size exceeded' issue on wall filtering.
- Fix : The problem of wall relayouting while doing load more fixed.
= 2.3.1 - 14.02.2019 =
- Fix : Fixing the problem of displaying Instagram user info on items.
- Fix : Updating documentation URL.
= 2.3.0 - 13.01.2019 =
- New : Instagram hashtag feed API deprecated and replaced with a new solution.
- New : Feature to get other Instagram user public feeds without an access token.
- New : Flickr Photoset/Album feed support added.
= 2.2.0 - 20.11.2018 =
- New : Adding image proxy feature to support caching images.
- New : Feature to connect and generate Twitter, Facebook and LinkedIn API credentials directly in plugin.
- Fix : Fixing embed images style issue in the carousel.
= 2.1.0 - 02.11.2018 =
- New : Display a preview of simple shared links in tweets on the item.
- New : Feature to add custom height for carousel items.
- Fix : Displaying Facebook Likes and Comments count.
= 2.0.0 - 26.08.2018 =
- New : Adding a feature to display embeds in the Twitter feeds.
- New : Adding Flickr Social Network.
- New : Adding VK Social Network.
- New : Adding SoundCloud Social Network.
= 1.0.9 - 15.08.2018 =
- Fix : Some notice errors fixed.
= 1.0.8 - 18.07.2018 =
- Page scroll lazy load problem fixed.
= 1.0.7 - 20.06.2018 =
- Fixing issue of custom code Ads.
- Fixing issue of not applying themes.
- Fixing issue of links color.
- Fixing issue of timezone.
= 1.0.6 - 29.04.2018 =
- Fixing the image loading problem when multiple carousels added on the same page.
- Fixing https images loading for Facebook shared item type.
= 1.0.5 - 10.04.2018 =
- Fix: Instagram user public feed deprecated problem fixed.
= 1.0.4 - 12.03.2018 =
- Fix: Loading only published Ads to streams.
= 1.0.3 - 07.03.2018 =
- Fixing the wall re-layout option problem.
= 1.0.0 - 24.10.2017 =
- First release.
Simple Social Media Stream
- Asana Plugins
- Last updated:
- Mar 19 2019
- Date added:
- Aug 13 2017
- GPLv2 or later
- Paid download
thank you, much appreciated!
New version published
- Improved sensor communication check
- Code cleanup
The sensor communication check now checks for the latest communication of any of the capabilities of a sensor.
Would there be a possibility of sticking the “No Information Received” card in the And column?
Reason being I’d like to do something like:
When the time is 7.30pm, and No Information Received and Last Update 24hours and device is not xyz, then send an alert
Basically, so I can get the alerts at the same time every day, when I generally have some way of actioning it?
If there is already a way to do this and I’m missing it please let me know!
At this time that's not possible, it's only a trigger card. I can see the need of running the function at a certain time, but as said earlier, I built this function because the standard battery check functionality in Homey isn't reliable. Heimdall is not meant to be a device monitoring app, so I have to think hard about how I would implement your request.
Can you please create an issue for it on github so I have a reminder and you can track progress?
Posted in Github for you
New version published
- Removed temporary code
Amazing! Thank you!
Unfortunately I had to pull back version 1.0.26 from the approval process. While running it myself over a longer period I discovered a nasty error in the communication test function which I want to fix first, sorry.
Hello, I want a trigger like the Key fob (Fibaro) to change the Surveillance card in the then… flows.
How to set the Surveillance Mode with a flow is explained in the instructions in the first post of this topic. See Add flows to activate and deactivate the desired Surveillance Mode.
If that’s not what you’re looking for please give some more information on what you’re trying to achieve.
I want to change armed and disarmed in the device by a key like this. Shall I use the HomeKit app?
My English is very bad
Do you have the Fibaro app installed?
Did you add the keyfob to Homey?
Does it show up in the flow editor?
If the answer to all of the above is yes you should just drag the keyfob icon from the left to the when… column in the flow editor.
I can't see armed and disarmed change in the Surveillance card in the device. I can change armed and disarmed, but it doesn't change in the card. When I come home, I can send a picture.
Hey, I am sorry
Now I have found the right card
It's a great app
No problem, glad you have it working now
I am having a strange issue I hope someone can assist with… Recently I tried disarming Heimdall using Apple HomeKit (connected to Homey via HomeyKit app) - however for some reason HomeKit didn’t instruct Heimdall app to disarm - so the alarm sounded. I am not too concerned about this as the app has been running flawlessly for a couple of months now.
The real issue is HomeKit is reporting the security system has been “Triggered” - I cannot remove this state - does anyone know how to clear the “Triggered” state from HomeKit?
I have a problem with installing Heimdall; after installing, the settings page doesn't load… Does anyone have a suggestion? I tried reinstalling Heimdall and rebooting Homey with no luck.
Normally you would either set the Surveillance Mode to Disarmed or just click on the Alarm Button to turn it off. I’ve just checked myself but something changed in either Homeykit (The Homey app) or Homekit (from Apple itself) because for me the Surveillance Mode Switch now shows up as an On/Off switch and no longer as a switch to set the desired mode.
Can you check how your Surveillance Mode Switch operates and share a screenshot?
Cuda compatibility
I am trying to install liberate-fhe and i am facing quite a few cuda-related issues, following https://docs.desilo.ai/liberate-fhe/getting-started/installation
Everything is fine until the step "Run CUDA compile script"
Is there a specific CUDA version that is needed? My machines are on either CUDA 10.1 or 11.5, and I am a bit hesitant to upgrade the CUDA version as it could mess with some other projects.
Furthermore, if the CUDA compiler version is different from the runtime one (nvcc vs nvidia-smi), it also fails to set up the installation. Is there a fix for this, or is it intended? I thought that if the runtime version is more recent than the CUDA compiler one it should work (as with torch, for example: I have nvcc version 10.1 but torch uses cu118, which works totally fine).
I tried to:
change the cuda12 version of torch to a cu118 one that finds the GPU on my other projects --> fails because the torch version does not match the nvcc one
install a version of torch that fits nvcc --> fails because it cannot locate /usr/local/cuda
Thanks in advance
Hello. Thank you for your interest in Liberate.FHE.
The cuda version of our Liberate.FHE is related to the pytorch version.
So for this to work, the cuda version of pytorch you installed must match the version of cuda (nvcc) installed on your system.
Currently, our package build system installs the latest version of pytorch,
so I think you probably have 2.2.x (or 2.1.x) version of pytorch and cuda 12.1 installed.
If you want to check the torch version,
import torch
print(torch.__version__)
# '2.2.1+cu121'
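To compare that build tag against the system compiler without guessing, a small helper can parse both version strings. This is a hypothetical sketch — the `+cuXYZ` suffix convention and the `nvcc --version` output format are assumptions based on the examples in this thread:

```python
import re

def torch_cuda_from_version(torch_version):
    """Map a torch version string like '2.2.1+cu121' to its CUDA
    release, e.g. '12.1'. Returns None for CPU-only builds."""
    m = re.search(r"\+cu(\d+)", torch_version)
    if not m:
        return None
    digits = m.group(1)  # '121' -> major '12', minor '1'
    return digits[:-1] + "." + digits[-1]

def nvcc_release(nvcc_version_output):
    """Extract the release (e.g. '12.1') from `nvcc --version` output."""
    m = re.search(r"release (\d+\.\d+)", nvcc_version_output)
    return m.group(1) if m else None
```

If `torch_cuda_from_version(torch.__version__)` and `nvcc_release(...)` disagree, that mismatch is the likely cause of the build failure.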
So the easiest way is to change cuda (nvcc) to the cuda version of torch installed.
However, if you are reluctant to change the cuda version due to your other projects, there is another method I suggest.
When you clone our repository, there is pyproject.toml for poetry build, and here is how to change it to the pytorch version you want.
You can change it in the following way.
from
[tool.poetry.dependencies]
python = ">=3.10,<3.13"
numpy = "^1.23.5"
mpmath = "^1.3.0"
scipy = "^1.10.1"
matplotlib = "^3.7.1"
joblib = "^1.2.0"
torch = "==2.2.1"
tqdm = "^4.66.1"
ninja = "^<IP_ADDRESS>"
to
[tool.poetry.dependencies]
python = ">=3.10,<3.13"
numpy = "^1.23.5"
mpmath = "^1.3.0"
scipy = "^1.10.1"
matplotlib = "^3.7.1"
joblib = "^1.2.0"
torch = [
{url = "https://download.pytorch.org/whl/cu115/torch-1.11.0%2Bcu115-cp310-cp310-linux_x86_64.whl"}
]
tqdm = "^4.66.1"
ninja = "^<IP_ADDRESS>"
The link I changed as an example is cuda 11.5 version of pytorch 1.11 version and python 3.10.
If there is a specific version of cuda or python you want, just find it in the link and change the address.
Things to consider when downloading pytorch manually are the pytorch version, python version, and cuda version.
However, we confirmed that it works with cuda versions 11.7 to 12.1 and pytorch versions 1.13 to 2.2.1.
There is one more way.
This method briefly changes the cuda version only in the terminal you run, so it won't cause conflicts with your other projects.
At your terminal,
export CUDA_HOME=/usr/local/cuda-10.1
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
And check nvcc version
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Feb__7_19:32:13_PST_2023
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0
And build our project
We build it manually now, but we have already registered it with pypi to make installation even simpler. Please wait a little longer.
For now, that's all I can tell you.
If this doesn't resolve your issue or you have additional questions (about anything related to the Library, not just installation), Please don't hesitate to ask us.
Thank you so much.
Hi again,
Thanks for your help, I can build it with the snippet:
$ export CUDA_HOME=/usr/local/cuda-12.1
$ export PATH=$CUDA_HOME/bin:$PATH
$ export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
and afterwards poetry install. So all the packages are the same as the initial pyproject.toml file.
I can then python setup.py with warnings:
running build_ext
/home/tristan/venv/lib/python3.10/site-packages/torch/utils/cpp_extension.py:502: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
/home/tristan/venv/lib/python3.10/site-packages/torch/utils/cpp_extension.py:424: UserWarning: There are no x86_64-linux-gnu-g++ version bounds defined for CUDA version 12.1
warnings.warn(f'There are no {compiler_name} version bounds defined for CUDA version {cuda_str_version}')
running build_ext
/home/tristan/venv/lib/python3.10/site-packages/torch/utils/cpp_extension.py:502: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
/home/tristan/venv/lib/python3.10/site-packages/torch/utils/cpp_extension.py:424: UserWarning: There are no x86_64-linux-gnu-g++ version bounds defined for CUDA version 12.1
warnings.warn(f'There are no {compiler_name} version bounds defined for CUDA version {cuda_str_version}')
poetry build and poetry run python -m pip install . have no issues nor warnings.
but when importing liberate
import liberate
I got the following error message
Traceback (most recent call last):
File "/home/tristan/test.py", line 1, in <module>
import liberate
File "/home/tristan/venv/lib/python3.10/site-packages/liberate/__init__.py", line 1, in <module>
from . import csprng, fhe, utils
File "/home/tristan/venv/lib/python3.10/site-packages/liberate/csprng/__init__.py", line 1, in <module>
from .csprng import Csprng
File "/home/tristan/venv/lib/python3.10/site-packages/liberate/csprng/csprng.py", line 7, in <module>
from . import (
ImportError: /home/tristan/venv/lib/python3.10/site-packages/liberate/csprng/chacha20_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZNK3c1017SymbolicShapeMeta18init_is_contiguousEv
which seems to be a cuda related issue.
Do you have a workaround for it ?
Thanks
I have those "undefined symbol" issues as well. Did you find a solution to this issue @tguerand?
Visual Studio Community 2015 Installation (Visual C++)
I am trying to install:
Visual Studio Community 2015 --> ISO
On Windows 7 Professional SP1 (A fresh install)
During installation, the program is saying the following files are either missing or damaged. Any thoughts on how to fix this? There are a lot of files. Am I missing some basic Windows library that is causing this problem? Any help would be greatly appreciated.
List of files either damaged or missing:
SqlDom.msi
AzureMobileServicesSdkV2.0.msi
DotNetVersionManager-x64.msi
SSDTDBSvcExternals.msi
TSqlLanguageService.msi
16ab2ea2187acffa6435e334796c8c89.cab
12613ba26e037e99a874a64c1084f880.cab
9126f6ff98d955951fe9323f4444c119.cab
05254f60ea43b4e3959b17cdb03268c0.cab
e10f8811d44b50885777f56f8272f66b.cab
07a57cdb41ba28cced14005f087267be.cab
ef4472fd7552490fd759075186ed2ec8.cab
ec9d39539c27e8cf5ad39bffce00c34e.cab
15bc5316e373960d82abc253bceaa25d.cab
5cf1d61a223a02ff2f52fe05f058d52e.cab
5509e4710313421be8d5e7cfbfde4d30.cab
1de82860db02f762c5f65a73daa31f3e.cab
0f5c9874ec8b03b3a2ef2148f76b34cf.cab
bfb5675f5755f6ddacec7ee0cc5328da.cab
SQLSysClrTypes.msi
SQLSysClrTypes.msi
SharedManagementObjects.msi
Looks like your ISO file is corrupted.
"I am trying to install:
Visual Studio Community 2015" - Why install an outdated compiler/IDE? Why not go with the most recent one?
"the program is saying the following files are either missing or damaged. Any thoughts on how to fix this?" - Get a new uncorrupted copy - obviously.
Where would I get an uncorrupted copy? I got this from Microsoft's website directly.
You need an uncorrupted ISO file.. Scroll down to 2015 and download it from here:
https://visualstudio.microsoft.com/vs/older-downloads/
The file I am installing is from the website you've provided. I tried re-downloading and installing. Still has the same error.
In order to verify the integrity of your VS2015 Community, please use the FCIV.exe to verify the hash of your ISO file. Any difference will indicate that your ISO file is corrupted. If so, please re-download the VS2015 Community from here.
BTW, the SHA1 of Visual Studio Community 2015 DVD(English): BAAD3CEBAB7A5834D8F78F7D02E4880C010F3BA9.
Find all possible dates
You get a string which contains an integer between 1000 and 99999999 as input. The input string might contain leading zeroes but cannot be longer than 8 characters.
Your goal is to output all dates between 6th of December 1919 and 5th of December 2019 which fit the input in these formats:
day - month - year (dmy)
month - day -year (mdy)
year - month - day (ymd)
Days and months can be either one or two digits. Years can be either two or four digits. Also don't forget the leap years!
Your output dates must be all in the same format (choose one). Its days, months and years must be clearly distinguishable (choose a separator or encode them like YYYYMMDD).
You are free to leave any duplicate dates in your output (you're a good boy if you remove them though). If there are no dates to return, return nothing.
Examples:
Input: "1234"
Output: 1934-02-01, 2012-03-04, 1934-01-02
Input: "01234"
Output: 1934-01-02, 1934-02-01
Input: "01211"
Output: 2001-11-01, 2011-01-01, 2001-01-11
Input: "30288"
Output: 1988-02-03, 1988-02-30, 1988-03-02
Input: "999999"
Output:
Shortest code in bytes wins. Have fun!
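A non-golfed reference implementation is handy for checking answers against the examples; this Python sketch is illustrative and not part of the challenge:

```python
from datetime import date, timedelta

def matching_dates(s):
    """All dates between 1919-12-06 and 2019-12-05 (inclusive) whose
    d-m-y, m-d-y, or y-m-d digit string equals the input, in ISO form."""
    start, end = date(1919, 12, 6), date(2019, 12, 5)
    found = set()
    d = start
    while d <= end:
        # days/months in 1 or 2 digits, years in 2 or 4 digits
        days = {f"{d.day}", f"{d.day:02d}"}
        months = {f"{d.month}", f"{d.month:02d}"}
        years = {f"{d.year}", f"{d.year % 100:02d}"}
        for dd in days:
            for mm in months:
                for yy in years:
                    if s in (dd + mm + yy, mm + dd + yy, yy + mm + dd):
                        found.add(d.isoformat())
        d += timedelta(days=1)
    return sorted(found)
```

Iterating over real `date` objects handles leap years and invalid dates for free.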
In your examples you have inputs with leading zeros. However you say that inputs are integers between 1000 and 99999999. Hence 1234 is equivalent to 01234. This needs clarifying.
Are there situations where a 7-digit integer leads to non-empty output?
@RobinRyder 2018111 should produce 2018-11-01,2018-01-11, if I'm understanding this challenge properly.
1988-02-30 doesn't look like a valid date.
How is 2001-11-01 a valid output for "01211"? How is 2001-02-11 not a valid output for "01211"?
Did you mean for the input to be "01111" for that example?
Ruby, 234 bytes
require'date'
a,b,g=%w(%Y%-m%d %Y%m%d %Y%-m%-d %Y%m%-d %-m%d%Y %m%d%Y %-m%-d%Y %m%-d%Y %d%-m%Y %d%m%Y %-d%-m%Y %-d%m%Y),Date.new(1920)-26,gets
(b..b+36524).map{|d|puts d.to_s if(a+a.map{|x|x.tr(?Y,?y)}).map{|x|d.strftime(x)}.member?g}
Try it online!
I'm willing to bet that this can be shortened significantly (with regex maybe), but I'm not clever or knowledgeable enough to figure that out.
If we are allowed to print the date object itself instead of a string, change puts d.to_s to p d.
The question has been closed now, but for future reference, [g]-array==[] is shorter than array.member?g by 1 byte, and is generally the shortest way to check if an element is in an array.
On the regex generation angle, I feel like this works pretty well, can probably be optimized further but already shaves a solid 40 bytes on its own (since you don't need the a+a.map part anymore): a=%w(Y y -m m -d d);a=a.product(a,a).map{|e|?%+e*?%}.grep /y.*m.*d|(d.*m|m.*d).*y/i
@ValueInk Thanks for the tips.
In general this problem is solved via an APM solution that profiles each function and shows what takes the longest, but CF doesn’t offer that and I’m not sure of any existing APM solutions that work with Workers.
I first started seeing the spikes with pretty small HTMLs (approx. 1 kB).
My first hunch was that these spikes were related to the size of the HTML (more HTML = more CPU work), so I modified the worker to produce a much larger HTML (approx. 250 kB), but the behavior is very similar. Only a very small percentage of requests go above 10ms.
In fact since I started using the worker the median CPU time is 1.7ms.
I don’t know, I’m starting to think Workers are not a good fit for this use case and I should probably move the HTML rendering somewhere else. It’s a shame since I would have preferred having all my infra with Workers.
My first year with Workers I had the same goal, everything in workers. But it's still too limited, so I've moved quite a lot of the workload to AWS Lambdas. It's easy keeping the execution under 100ms there (since I already optimized for Workers), so costs can be closer to Workers. Just keep in mind that their API gateway will incur costs for every request, and bandwidth has to be calculated. I expect around 100-200% higher cost.
If you consider Lambdas, use the Serverless Framework to manage it or you’ll be pulling your hair out in no-time…
You can create a random worker ID and store it in a global variable with a start time, and log that with each request. That will let you see if it’s a fresh worker or not… We do this with Logflare so you can actually see the invocation of each instance of a worker.
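That instance-ID idea can be sketched in a few lines — the names here are illustrative, not Logflare's actual API:

```javascript
// Each isolate gets a random ID and a start time when it is first
// created; both are logged on every request, so a "fresh" worker shows
// up in the logs as a new ID with an age near zero.
const instance = {
  id: Math.random().toString(36).slice(2, 10),
  startedAt: Date.now(),
};

function logEntry(now = Date.now()) {
  // Attach this object to every request's log line.
  return { workerId: instance.id, ageMs: now - instance.startedAt };
}
```

Inside a fetch handler you would emit `logEntry()` alongside the request data and group by `workerId` when analyzing spikes.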
Our CPU limit enforcement is designed to be lenient with random spikes – in part because, practically speaking, there may be no way to avoid them. On a machine that is doing lots and lots of other things at the same time, random noise can cause the same computation to take much more CPU time sometimes.
Basically, you are only at risk of errors if your running average CPU usage is over the limit, or if a single request goes way over, like 10x.
I hesitate to explain the mechanism in more detail because we’re likely to change it soon. But, in short, you don’t need to worry about random single-request spikes.
As for why you’re seeing a spike, I’m not entirely sure. We don’t actually count script startup time in the first request time, so it’s not that. But I think another possibility is lazy parsing. V8 tries to avoid parsing function bodies until the function is first called. If you have a fairly large code footprint, a lot of which gets called on the first request, that could explain the first request running a bit slower, I think. That said, I’m speculating here; it’s hard to say without doing some profiling (which, unfortunately, at present, you wouldn’t be able to do yourself – we’d like to fix that).
I want to be able to disable a chart if there is no data on it, and then enable the chart when there is data. The reason I want to do that is that I have three charts that are synced, and I want to disable the charts that have no data, so that the cursor modifier or the zooming and such will not affect the charts without data and will only affect the one with data.
Attached image is 3 charts that are in sync.
- Nung Khual asked 1 month ago
- last active 1 month ago
Hello Friends ,
Sync two charts in Android: the bottom chart changes on top-chart touch
We are looking for a solution where we show two charts, top and bottom, on an Android screen. When the top chart is changed by a gesture/finger touch, the bottom chart should expand/shrink, like zooming in and out.
Assume the top chart's data range is between 0 and 300,000; I want the bottom chart to show only the selected/touched part, in expanded form.
I have attached below a screenshot which may be helpful.
The solution can look like this, but I am not able to get code for this implementation.
I appreciate any help regarding this issue.
- vasim simform asked 2 years ago
- last active 2 years ago
App version 22.214.171.12423
I am able to sync two charts and receive events between both charts.
However, I wanted to remove the shared y-axis panning between the charts.
I was able to get this working by setting the “withReceiveHandledEvents(false)” linked to the motion event group and setting the “.withYAxisDragModifier().withReceiveHandledEvents(false)”.
However, the current problem being faced, is that, the cursor is now no longer synced between the two charts as if I had “withReceiveHandledEvents(false)” on the motion event group.
I tried using the cursor modifier group, but this did not provide a syncing cursor between the charts.
Is there a way to sync the cursor between the charts without having the y-axis panning synced as well?
- Eyram Dornor asked 3 years ago
- last active 3 years ago
I have a surface with two x-axes and one y-axis. I am plotting 2 lines, which both have a numerical axis for their x-axis. I would like to be able to pan the lines so that I can align them for further analysis. However, I want the scaling on the two x-axes to stay identical. I have implemented the panning, which was straightforward using the XAxisDragModifier DragMode=”Pan”, so that is sorted. However, the scaling on the two axes is not identical, as each of the two lines has values for different x-ranges. This means that when I overlay them, one is more stretched out relative to the other.
Is there a way to lock the scaling for x-axes? See image below, I would like the red lines to be in sync, and equally big. So that 5 cm on the first x-axis is an equal amount of time as 5 cm on the second x-axis
- Lars van der Lee asked 4 years ago
- last active 4 years ago
Containers on Polaris
Since Polaris is using NVIDIA A100 GPUs, there can be portability advantages with other NVIDIA-based systems if your workloads use containers. In this document, we'll outline some information about containers on Polaris including how to build custom containers, how to run containers at scale, and common gotchas.
Container creation can be achieved in one of two ways: either by using Docker on your local machine and publishing the image to DockerHub, or by using a Singularity recipe file and building on a Polaris worker node. If you are not interested in building a container and only want to use the available containers, you can read the section on available containers.
The container system on Polaris is `singularity`. You can set up singularity with a module (this is different than, for example, ThetaGPU!):
There used to be a single `singularity` tool, but the project split in 2021. There are now two `singularity`s: one developed by Sylabs, and the other as part of the Linux Foundation. Both are open source, and the split happened around version 3.10. The version on Polaris is from Sylabs, but for completeness, here is the Linux Foundation's version. Note that the Linux Foundation version is renamed to `apptainer` - a different name for roughly the same thing, though divergence may happen after 2021's split.
Build from Docker Images or Argonne Github container registry
Docker containers require root privileges, which users do not have on Polaris. That doesn't mean all your docker containers aren't useful, though. If you have an existing docker container, you can convert it to singularity pretty easily on the login node. To build the latest NVIDIA container for PyTorch you can run the following:
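A typical invocation looks like the following (the `22.06-py3` tag is an assumption for summer 2022 — check NVIDIA's NGC registry for current tags):

```shell
# Convert NVIDIA's Docker image for PyTorch into a Singularity .sif
# on a login node; the tag below is illustrative, not prescriptive.
module load singularity
singularity build pytorch-22.06.sif docker://nvcr.io/nvidia/pytorch:22.06-py3
```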
"Latest" here means as of when these docs were written, summer 2022. It may be useful to get a newer container if you need the latest features. You can find the PyTorch container site here. The TensorFlow containers are here (though note that ALCF doesn't typically prebuild the TF-1 containers). You can search the full container registry here.
Build with a Recipe
You can also build a singularity container using a recipe file. Detailed instructions for recipe construction are available on the Singularity Recipe Page. You can also check our singularity recipe example for building a mpich version 4 container on Polaris.
Once you have a recipe file, you can build it on Polaris, but only on compute nodes. You can launch an interactive job using the attribute `singularity_fakeroot=true` to build on a compute node. You need to replace `<project_name>` with the appropriate project to charge, and use the `preemptable` queue since we only request a single node.
After your interactive job has started, you need to load the `singularity` module on the compute node and export the proxy variables for internet access. Then you can build the container as shown below.
Alternatively, you can just pull the mpich 4 image distributed by us and build on top of it
Running Singularity container on Polaris
Example submission script on Polaris
To run a container on Polaris you can use the submission script described here. Below we have described the submission script for your understanding.
First we define our job and our script takes the container name as an input parameter.
We move to current working directory and enable network access at run time by setting the proxy. We also load singularity.
This is important for the system (Polaris - Cray) mpich to bind to the container's mpich. Set the following environment variables:
Set the number of ranks per node spread as per your scaling requirements
Finally launch your script
echo C++ MPI
mpiexec -hostfile $PBS_NODEFILE -n $PROCS -ppn $PPN singularity exec -B /opt -B /var/run/palsd/ $CONTAINER /usr/source/mpi_hello_world
echo Python MPI
mpiexec -hostfile $PBS_NODEFILE -n $PROCS -ppn $PPN singularity exec -B /opt -B /var/run/palsd/ $CONTAINER python3 /usr/source/mpi_hello_world.py
The job can be submitted using:
If you just want to know what containers are available, here you go.
- For running mpich/MPI containers on Polaris, it can be found here
- For running databases on Polaris. It can be found here
- For using shpc - that allows for running containers as modules. It can be found here
- Some containers are found in /soft/containers
The latest containers are updated periodically. If you have trouble using containers, or request a newer or a different container please contact ALCF support at
Permission Denied Error: One may get a `permission denied` error during the build process due to a nasty permission setting, quota limitations, or simply an unresolved symbolic link. You can try one of the solutions below:
- Check your quota and delete any unnecessary files.
- Clean up the singularity cache, `~/.singularity/cache`, and set the singularity tmp and cache directories as below:
- Make sure you are not in a directory accessed with a symlink, i.e. check if `pwd -P` returns the same path.
- If any of the above doesn't work, try running the build in your home directory.
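For the cache/tmp clean-up step above, the directories can be pointed at a roomier filesystem via Singularity's standard environment variables (the paths below are placeholders):

```shell
# Point Singularity's build scratch and image cache away from $HOME,
# e.g. at a project filesystem with more quota (paths are illustrative).
export SINGULARITY_TMPDIR=/path/to/scratch/singularity-tmp
export SINGULARITY_CACHEDIR=/path/to/scratch/singularity-cache
mkdir -p "$SINGULARITY_TMPDIR" "$SINGULARITY_CACHEDIR"
```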
Mapping to rank 0 on all nodes: This is mainly due to the container's mpich not binding to the system mpich. It is imperative for the container to have an mpich that can bind dynamically to the system mpich at runtime. Ensure your submission script has the following variables and modules loaded (see below). If this does not resolve the issue, ensure the container's mpich is built with the '--disable-wrapper-rpath' flag. Please refer to this link to find examples of building an mpich-based container from scratch and running it on Polaris.
libmpi.so.40 not found: This may be due to mpich binding to the wrong system mpich. Try removing the .conda, .cache, and .local folders from your home directory. Also rebuild your container and try again.
Containers built with OpenMPI may not work correctly. Please ensure your container is built with mpich and that the base image is Debian-based (e.g., Ubuntu).
Now, I was thinking, if every time an entity moved (player or NPC) all players were notified, the bandwidth would quickly add up, so I came up with the following idea:
This is a big problem that is the subject of a lot of active research. Like most of MMOG development, it's always a case of tradeoffs - PC's these days are still far too puny to handle MMOG's without a lot of compromises.
A data structure will be maintained for each zone that stores the position of each active entity (player and NPC). Each active will have a list of other actives within its "square of influence" (a few grid squares past the edge of the screen). Whenever an active moves, it will first drop any references that have left the square of influence, then acquire any new references that have entered the square of influence, and finally notify all references within the square that the move has been made so they can update their respective client displays.
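As an illustration (not from the original post), the quoted scheme maps naturally onto a grid-bucket "interest manager"; the cell size and names below are arbitrary:

```python
from collections import defaultdict

CELL = 64  # world units per interest cell (illustrative)

class InterestGrid:
    """Minimal sketch: bucket entities into grid cells and notify only
    entities within a square of influence of a moving entity."""

    def __init__(self, radius=1):
        self.radius = radius           # cells past the screen edge
        self.cells = defaultdict(set)  # (cx, cy) -> set of entity ids
        self.pos = {}                  # entity id -> current cell

    def _cell(self, x, y):
        return (x // CELL, y // CELL)

    def move(self, eid, x, y):
        """Update eid's cell and return the set of entities that should
        be notified of the move (everyone in nearby cells)."""
        new = self._cell(x, y)
        old = self.pos.get(eid)
        if old != new:
            if old is not None:
                self.cells[old].discard(eid)  # drop stale bucket entry
            self.cells[new].add(eid)
            self.pos[eid] = new
        cx, cy = new
        watchers = set()
        for dx in range(-self.radius, self.radius + 1):
            for dy in range(-self.radius, self.radius + 1):
                watchers |= self.cells[(cx + dx, cy + dy)]
        watchers.discard(eid)  # don't notify the mover itself
        return watchers
```

The cost per move is bounded by the number of nearby entities, not the zone population, which is the whole point of the scheme.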
This will certainly work. However, 99% of the approaches to MMOG development "work"; it's just they break whenever e.g. "more than X players in the game", either because the algorithm uses too much memory or too much CPU.
If you want advice on what algo to use for this task, you need to sit down and very carefully think about your requirements first. You haven't, for instance, told us the most important thing of all - which is the max number of players that any given player will ever be able to "see". Equally, we'd need to know what limit you're going to place on the number of players in your game.
The algo itself is perfectly good...indeed it sounds just like one that I think Ensemble Studios (Age of Empires people) proposed - although they suggested a more generic system that enabled an RTS to constantly update info in multiple different dimensions (e.g. "Line of Sight + FoW", "Pathfinding", "desirable locations", etc). The aim was to provide a system that had a high cost per unit-movement, but a very very low cost per query - so that units could calculate e.g. whether they could see another unit almost instantly. Works particularly well when you have many more units "looking" at the map/data than are moving.
This system works based on the assumption that active A can "see" active B if and only if B can "see" A. This data structure could also be extended to allow the passing of communication messages to everyone on the screen.
Actually, it makes a lot more assumptions than that, mainly to do with performance, relative numbers of moving objects, etc etc. Hopefully what I've mentioned above gives you a small taste of this?
To be honest, I hope you aren't really attempting an MMOG unless you've got a large team or financial backing or something. You'd be much better off doing a 16-player or 64-player or similar game. MMOG development is really really really hard, and although it's certainly within the reach of anyone, there's a lot of specialist stuff you have to learn in order to make your game work OK (in addition to getting all the standard things from game development right - graphics, sound, networking, collision detection, AI).
If you (or anyone else) really want to work on an MMOG, I'd suggest getting involved with WorldForge.org (Open Source MMOG development). They've been going for about 6-7 years, and have developed lots of technology for MMOG game development. The main benefit of getting involved is learning all about MMOG development from people who have been working through the issues for years. The main problem is that it's OpenSource, so people come and go, and there's a very large number of people with little or no real understanding of MMOG technical problems at any one time. On the plus side, they have something like 50+ talented artists constantly churning out graphics for use in any WorldForge-based game (and it's OS, so you can go ahead and start a new game based on their tech).
Update 8/15/2007: Corrected annual fee information.
At a Glance
Product: Yoggie Gatekeeper Pro (YGKPPRO01)
Summary: Cool new category of security appliance. Provides a strong suite of security features for a reasonable price in a tiny, portable package.
Pros:
• Almost perfectly cross-platform compatible
• Very small for portable use
• Can protect one, or multiple computers
Cons:
• One more thing to carry on the road
• Support from Israel imposes some delays
• Relies on client software for spyware and virus detection
The Yoggie Gatekeeper Pro is the second product I've reviewed for SmallNetBuilder that promises unified threat management (see D-Link DSD-150: Good idea, flawed implementation). The idea of the Yoggie Gatekeeper Pro is to take big enterprise information technology security and bring it painlessly to SOHO networks (and laptops) to provide UTM (Unified Threat Management).
Figure 1 shows you what comes in the box with a Yoggie Gatekeeper Pro. The manual is a 12 page double-sided map-folded brochure where each page is 4.5" by 4.5". You get an Ethernet cable and a CD with Version 1.03 of the Yoggie software setup program.
Figure 1: What you get
Figure 2: Front panel
The product shot above (Figure 2) shows the (extremely small) icons on the face of Yoggie. The blue LEDs on my evaluation unit were also very small. To monitor the LEDs on the Yoggie you need to keep the unit perpendicular to your angle of view. The SD activity LED seems to indicate something besides SD activity as the LED blinks pretty much constantly, even when an SD card is not in the Yoggie.
The back side of Yoggie has a rubber door covering the rear. On the left of Figure 3 is the recessed reset button. In the middle is the SD card port, used for recovering / resetting your password (more on that shortly). Finally, on the right-hand side is a power port (the optional power adapter is $17.99 on Yoggie's web site).
Figure 3: Yoggie Gatekeeper Pro - back view
With double-sided hook-and-loop or duct tape, you can mount the Yoggie Gatekeeper Pro as a semi-fixed security appliance for a laptop. While this seems like an awkward way to protect your laptop, it could be an interesting way for IT departments to give back some user-friendliness while controlling risk. For example, by turning off the annoying Vista confirmation that Apple made its "Security" commercial about, while retaining a pretty robust list of security features that do not need to be installed in Vista.
There are two ways to set up Yoggie with a laptop. You can connect LAN cables (in-line) to put Yoggie between your computer and the Internet (Figure 4). If you set up your computer using the in-line method, Yoggie defaults to a 192.168.4.x private subnet. My home subnet is 192.168.3.x, so this wasn't a conflict for me, but it is conceivable that you might run into a conflict if the hotel you stay in happens to have a 192.168.4.x subnet.
Figure 4: Yoggie in action (wired version)
When I saw the .4.x subnet initially, I assumed that the Yoggie took whatever private IP range it was on (i.e., the 192.168.y.x) and incremented the subnet (y) by 1. Actually, no; the subnet is settable in the Management Console. When you set up Yoggie in the USB-only wireless configuration, your Ethernet ports use DHCP as assigned by your local DHCP server.
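Whether a given LAN clashes with the Yoggie's default subnet is easy to check ahead of time. A minimal Python sketch using the standard ipaddress module (the 192.168.4.0/24 default below is just the 192.168.4.x network described above; as noted, the real setting lives in the Management Console):

```python
import ipaddress

def conflicts(lan_cidr, yoggie_cidr="192.168.4.0/24"):
    """Return True if the local LAN subnet overlaps the Yoggie's subnet."""
    return ipaddress.ip_network(lan_cidr).overlaps(
        ipaddress.ip_network(yoggie_cidr))

# conflicts("192.168.3.0/24") -> False (the home subnet mentioned above)
# conflicts("192.168.4.0/24") -> True  (a hotel using the same range)
```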
The other connection method is to dangle Yoggie by itself from a USB port, so that it will redirect and filter wireless and wired Ethernet packets. The USB-only configuration is called the "wireless" configuration in Yoggie's manual even though it filters both wired and wireless packets.
Figure 5 shows Yoggie connected in its "wireless configuration". The wireless configuration requires Yoggie to be plugged in to a USB port for power; when using the in-line configuration you can either plug Yoggie in to your USB port, or, you can buy an optional extra power supply to provide power.
Figure 5: Yoggie in action (wireless version)
The Yoggie USB cable is a rubberized, flat cable that acts like a collar going around the face-plate of the unit. You unplug the USB connector, unwind the USB cable, and then plug it into a USB port.
In addition to connecting a Yoggie Gatekeeper Pro to laptops, Yoggie can be used on LANs (see Figure 6). Install a Yoggie between your router and your modem to implement security from the outside-in. If you do this you will enter the "port forwarding zone" and have to forward ports for the games, incoming mail, ssh, etc. that you'll need.
In fact, Yoggie is very similar to my ClarkConnect 4.1 Enterprise server. The main differences are that Yoggie is less complex (no root access means more safety and less tinkerability), Yoggie does not have a proxy server cache to speed up all the client PC updates that I do and Yoggie does not have an annual fee attached to it like ClarkConnect's $85 a year.
Update 8/15/2007: Update subscription is $40/year, with the first year included in the purchase price.
Figure 6: Yoggie Gatekeeper SOHO
The best summary of what Yoggie does and how is provided by a product comparison page on Yoggie's web site. I've reproduced the comparison in Table 1.
Windows CE Jobs
Windows CE, or Windows Embedded Compact, is an operating system developed by Microsoft in 1996 for small devices and embedded systems, such as point-of-sale (POS) terminals, digital signage displays, and factory controllers. To effectively and efficiently use this operating system, a client may need to hire a Windows CE Developer who can create applications and/or customize features of the OS. With their knowledge of creating software components that interact with the OS's kernel, writing drivers for new types of hardware, or developing custom configuration tools that let users change the OS's settings, a Windows CE developer has the skills to make machines run more smoothly and effectively.
Here's some projects that our expert Windows CE Developer made real:
- Solving complex constraints to install software on PC
- Configuring software for optimum performance
- Setting up authentication access for data security
- Handling hardware functions per customer requirements
- Creating effective communication between devices
- Developing machine learning components for smart technology
- Producing user interface that can be used easily by end users
With this impressive list of accomplishments from our Windows CE developer team, it's no wonder our clients are happy with the results. There are countless applications that can be created with this software and hardware combination. A great way to find out more is to post your project of hiring a Windows CE Developer on Freelancer.com today and take advantage of the expertise of our talented professionals. From 2,449 reviews, clients rate Windows CE Developers 4.97 out of 5 stars.
Hire Windows CE Developers
Unable to use the application due to non-availability of the database for the application server. In any case, if any one of the database servers is down, the system should automatically switch to a different node to make sure the application keeps running and end users see no impact, for both execution and downloading reports.
We have an application built to work on a Windows CE OS and need it converted/built to work in an Android OS environment.
I need to build for Chrome or any browser to communicate with the BLE devices within range and get a list that can be put into an array and sent to an external system. We tried to use the JS Web Bluetooth API; however, it returns the list of discovered devices in a pop-up that we can't read. As far as I can tell we cannot bypass this using the JS within, so we need something that can be browser-based. We are open to the solution of your choosing as long as it fits our requirements in the attached document. We currently have a Windows package that can be downloaded; however, we have some clients that do not want to have it installed on the computer for security rules or do not want to have a Windows account to download it from the Windows store, and we will be deployi...
Please don't get scared by a little mathematics. This appendix is written to be easy to follow, thought provoking, and worthwhile to read. There is nothing involved in the equations other than addition, subtraction, multiplication, and division. All the important stuff is explained using high school English sentences – in logic you can follow.
Locating the center of a population is easy if one is satisfied with some assumptions and mathematics. From the internet one can get the latitude and longitude for each member’s address. Ignoring some minor effects related to the non-flatness of the Earth, the “geometric mean center” of a population is the point defined by the average latitude and longitude. That's the answer. The computation to prove it will be discussed after some other details.
Given the latitudes and longitudes of two points, there are complications involved in calculating their separation in miles. For our purposes we will assume that Ohio is flat, and small enough that miles per degree of latitude, or longitude, are constant statewide. This allows us to use the Pythagorean relation you remember from your high school class in trigonometry – $d^2 = \Delta x^2 + \Delta y^2$ – to compute theoretical straight-line distances between points. The error introduced by these assumptions is negligible inasmuch as we generally travel by roads, and not "as the crow flies."
At the Equator there are approximately 69.17 miles per degree of either latitude or longitude. Distances between degrees of latitude (from North to South) are constant regardless of how far one is from the Equator. For our calculations, let's round up to an even 69.20 miles per degree of latitude.
Distances between degrees of longitude (from West to East), however, vary with the cosine of the latitude at which the distance is being measured, falling to zero at the poles. Ohio's latitude, which is to say its distance from the Equator, is about 38.50° at _____________ and about 41.50° at _____________. In order to calculate the distance between degrees of longitude, we will split the difference and assume 40°. To calculate the distances between degrees of longitude in Ohio, then, we just apply a simple formula: 69.20·cos(40°), which comes to 53 miles per degree of longitude.
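The cosine correction above can be checked in a couple of lines (a Python sketch; the 69.20 miles per degree and the 40° mid-latitude are the assumptions from the text):

```python
import math

def miles_per_degree_longitude(lat_deg, miles_per_degree_lat=69.20):
    """Distance between degrees of longitude shrinks with cos(latitude),
    falling to zero at the poles."""
    return miles_per_degree_lat * math.cos(math.radians(lat_deg))

# At Ohio's assumed mid-latitude of 40 degrees:
# miles_per_degree_longitude(40.0) ≈ 53.0 miles per degree
```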
The point we want to find is the center of population. If we knew this point (its latitude and longitude), we could compute its distance from each of the members' homes. To be called the center of population, there must be something minimum about these distances. We will choose the point such that the sum of the squares of all these distances is a minimum. The point thus computed is called the geometric mean center. Other "somethings to be minimum" are possible, but this "minimum sum of squares" is not only the most generally accepted definition, but differs only a little from a location computed by other practical choices (and it also simplifies the math).
Now for the math: Let the home of member $i$ be located at $(x_i, y_i)$ – that is, at longitude $x_i$ and latitude $y_i$. Let the candidate center point be at $(x_c, y_c)$. (We have let $x$ run East-West and $y$ run North-South, but these are arbitrary choices.)

The distance squared between $(x_i, y_i)$ and $(x_c, y_c)$ is $d_i^2 = (x_i - x_c)^2 + (y_i - y_c)^2$. That is, $d_i$ is the diagonal distance between the address of member $i$ and the candidate center. The distance $x_i - x_c$, and likewise $y_i - y_c$, may be either positive or negative – consequently, the direction of the diagonal can be along any point of the compass.

Given $N$ members, the sum of distances squared for all members is:

$$S = \sum_{i=1}^{N} \left[ (x_i - x_c)^2 + (y_i - y_c)^2 \right]$$

The big $\Sigma$ (the Greek capital letter sigma) is the addition symbol. Together with its subscript $i=1$ and superscript $N$, it tells us to sum all $N$ of the distances squared.

$S$ is just a number – it is the sum of all the distances squared. Our problem is to solve for the unknowns, $x_c$ and $y_c$. The equation is true for any $x_c$ and $y_c$, and, in fact, any location close enough to Ohio that the Earth can be considered flat. The problem is to compute the pair $(x_c, y_c)$ that gives the smallest value of $S$. This will be done using 1950's high school mathematics.

Before proceeding, note that the $x$ and $y$ variables can be summed separately and then added together:

$$S = \sum_{i=1}^{N} (x_i - x_c)^2 + \sum_{i=1}^{N} (y_i - y_c)^2$$

That is, the $x$ and $y$ terms can be summed separately and then added together.
Consider the point $(x_c + \delta, y_c)$, close to $(x_c, y_c)$. Think of $\delta$ as a tiny distance, although this condition is not important until later. Computing the sum of squares of distances from members' homes to this new point, we get:

$$S' = \sum_{i=1}^{N} (x_i - x_c - \delta)^2 + \sum_{i=1}^{N} (y_i - y_c)^2$$

Now we square the terms in parentheses. Squaring the triple term will not be too hard, will it?

Also note that $\delta$ is a distance, so $(-\delta)^2$ is the same as $\delta^2$. We get:

$$S' = \sum_{i=1}^{N} \left[ (x_i - x_c)^2 - 2\delta(x_i - x_c) + \delta^2 \right] + \sum_{i=1}^{N} (y_i - y_c)^2$$

The next three steps use basic high school algebra. Combining like terms, we are left with:

$$S' = S + \sum_{i=1}^{N} \left[ \delta^2 - 2\delta(x_i - x_c) \right]$$

Summing the terms separately gives:

$$S' = S + \sum_{i=1}^{N} \delta^2 - 2\delta \sum_{i=1}^{N} x_i + 2\delta \sum_{i=1}^{N} x_c$$

In the first term $\delta^2$ is added to itself $N$ times, so this term simplifies to $N\delta^2$. Similarly, the third term is $2\delta N x_c$. The second term involves the sum of all $x_i$, which is simply $N$ times the average $x$. Average is commonly denoted by an overbar, so the middle term simplifies to $2\delta N \bar{x}$. ($\bar{x}$ is pronounced "$x$ bar".)

Next, we factor $N\delta$ from each term, and get:

$$S' = S + N\delta \left[ \delta - 2(\bar{x} - x_c) \right]$$

Now it's time for some straight thinking. Suppose we had considered a point $(x_c, y_c + \varepsilon)$ instead of $(x_c + \delta, y_c)$: The end result would have been an identical equation, and the same concluding logic, but with $\varepsilon$ and $\bar{y}$ instead of $\delta$ and $\bar{x}$. If the point was offset in both directions, we would have:

$$S' = S + N\delta \left[ \delta - 2(\bar{x} - x_c) \right] + N\varepsilon \left[ \varepsilon - 2(\bar{y} - y_c) \right]$$

Recall that the point $(x_c, y_c)$ is the population center point that we are solving for. If this point is at the average coordinate values, $x_c = \bar{x}$ and $y_c = \bar{y}$, the terms in brackets are just $\delta$ and $\varepsilon$, so $S' = S + N\delta^2 + N\varepsilon^2$. Either $\delta$ or $\varepsilon$ may be negative, but $N\delta^2 + N\varepsilon^2$ must be either zero or positive.
Thus, if $x_c = \bar{x}$ and $y_c = \bar{y}$, and if $\delta$ and $\varepsilon$ each equal one mile, then the sum of distances squared from $(\bar{x} + \delta, \bar{y} + \varepsilon)$ to all members' homes is $S + 2N$ square miles. If the offsets are each one foot, then the sum of distances squared is $S + 2N$ square feet. Likewise for inches, millimeters, etc.

Now for the punchline: Any candidate center point different from $(\bar{x}, \bar{y})$ has a larger sum of squared distances to the members' homes.

Now, remember: We said $\delta$ is to be tiny. That was an understatement – we really want it to be infinitesimal. OK, you've got to think a little harder now about what that means. But if you've read this far you must be having fun. Since $\delta$ is infinitesimal, what's in the brackets is just $-2(\bar{x} - x_c)$. Finally, summing the $x$ and $y$ terms separately, we get the change in $S$ due to a very tiny $\delta$ and $\varepsilon$:

$$S' - S = -2N\delta(\bar{x} - x_c) - 2N\varepsilon(\bar{y} - y_c)$$

Surely, there was no problem noting that this change vanishes for every tiny $\delta$ and $\varepsilon$ – as it must at a minimum – only when $x_c = \bar{x}$ and $y_c = \bar{y}$.
For defining population centers, one may wish to favor locations closer to certain members' addresses, e.g., officers, ritual team members, etc. Such favoring is arbitrary but is easily accomplished in the math. For example, if one wants to favor a point nearer the Worthy Chief's address, one can weight the distance to his address as more important than the distances of other members. The computation with weighted distances is essentially the same as without.
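The whole derivation reduces to averaging coordinates, and the weighted variant just described needs no extra machinery: use a weighted average. A minimal Python sketch (the function name and weighting scheme are illustrative):

```python
def mean_center(coords, weights=None):
    """Geometric mean center: the (optionally weighted) average of the
    member coordinates, which minimizes the (weighted) sum of squared
    distances per the derivation above. coords is a list of (x, y)."""
    if weights is None:
        weights = [1.0] * len(coords)   # unweighted: every member counts equally
    total = sum(weights)
    xc = sum(w * x for w, (x, y) in zip(weights, coords)) / total
    yc = sum(w * y for w, (x, y) in zip(weights, coords)) / total
    return xc, yc

# Unweighted center of three homes:
# mean_center([(0, 0), (2, 0), (1, 3)]) -> (1.0, 1.0)
# Weighting the first address 3x (e.g., the Worthy Chief) pulls the center toward it:
# mean_center([(0, 0), (4, 0)], weights=[3, 1]) -> (1.0, 0.0)
```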
Fix './cluster.sh list' on AWS when some VMs have no name
In case some AWS VMs have no name (VMs not managed by openshift-online-ansible),
./cluster.sh list fails with the following error:
./cluster.sh list
/home/lenaic/doc/prog/RedHat/openshift-online-ansible/lib/aws_helper.rb:31:in `sort_by': comparison of Array with Array failed (ArgumentError)
from /home/lenaic/doc/prog/RedHat/openshift-online-ansible/lib/aws_helper.rb:31:in `sort_by!'
from /home/lenaic/doc/prog/RedHat/openshift-online-ansible/lib/aws_helper.rb:31:in `get_hosts'
from /home/lenaic/doc/prog/RedHat/openshift-online-ansible/lib/aws_command.rb:118:in `list'
from /home/lenaic/.gem/ruby/2.2.0/gems/thor-0.19.1/lib/thor/command.rb:27:in `run'
from /home/lenaic/.gem/ruby/2.2.0/gems/thor-0.19.1/lib/thor/invocation.rb:126:in `invoke_command'
from /home/lenaic/.gem/ruby/2.2.0/gems/thor-0.19.1/lib/thor.rb:359:in `dispatch'
from /home/lenaic/.gem/ruby/2.2.0/gems/thor-0.19.1/lib/thor/invocation.rb:115:in `invoke'
from /home/lenaic/.gem/ruby/2.2.0/gems/thor-0.19.1/lib/thor.rb:235:in `block in subcommand'
from /home/lenaic/.gem/ruby/2.2.0/gems/thor-0.19.1/lib/thor/command.rb:27:in `run'
from /home/lenaic/.gem/ruby/2.2.0/gems/thor-0.19.1/lib/thor/invocation.rb:126:in `invoke_command'
from /home/lenaic/.gem/ruby/2.2.0/gems/thor-0.19.1/lib/thor.rb:359:in `dispatch'
from /home/lenaic/.gem/ruby/2.2.0/gems/thor-0.19.1/lib/thor/base.rb:440:in `start'
from ./cloud.rb:27:in `block in <main>'
from ./cloud.rb:25:in `chdir'
from ./cloud.rb:25:in `<main>'
With that fix, we get the expected result:
./cluster.sh list
Name Env State IP Address Created By
---- --- ----- ---------- ----------
UNSET UNSET running <IP_ADDRESS>
test-openshift-master-92675686da test running <IP_ADDRESS> lenaic
test-openshift-node-2a43dcb0b4 test running <IP_ADDRESS> lenaic
test-openshift-node-94af201376 test running <IP_ADDRESS> lenaic
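The underlying failure is generic: sorting records when some sort keys are nil (Ruby) or None (Python) raises a comparison error. A hedged Python sketch of the same fix pattern, substituting a sentinel such as "UNSET" so every comparison is defined (the actual fix lives in aws_helper.rb's sort_by; the host dicts below are illustrative):

```python
hosts = [
    {"name": None, "env": None},  # a VM not managed by openshift-online-ansible
    {"name": "test-openshift-node-2a43dcb0b4", "env": "test"},
    {"name": "test-openshift-master-92675686da", "env": "test"},
]

# Sorting on the raw values blows up as soon as a None meets a string
# (TypeError in Python 3, the ArgumentError in the traceback above in Ruby).
# Substituting a sentinel makes the sort key total:
hosts.sort(key=lambda h: (h["env"] or "UNSET", h["name"] or "UNSET"))
```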
:+1:
@twiest: this should be ready for merge as well.
:+1:
Principles from software development
- 5 Whys
- Brooks's law
- Dependency Injection (DI)
- Fail fast
- Imperative vs Declarative style of programming
- Live and die by documentation
- Minimum viable product
- Pareto principle
- Postel's law
- Principle of simplicity
- Reinventing the wheel
- Rubber duck debugging
- Shiny Object Syndrome
- The next action
- The twelve-factor app
- The Zen of Python
- Using common sense
- Extreme programming (XP)
5 Whys
The primary goal of the technique is to determine the root cause of a defect or problem by repeating the question "Why?" Each question forms the basis of the next question. The "5" in the name derives from an empirical observation on the number of iterations typically required to resolve the problem.
The technique was formally developed by Sakichi Toyoda and was used within the Toyota Motor Corporation during the evolution of its manufacturing methodologies.
The benefits of asking why (according to GTD):
- it defines success
- it creates decision-making criteria
- it aligns resources
- it motivates
- it clarifies focus
- it expands options
After Action Review (AAR)
An After Action Review is a structured review or de-brief process for analyzing what happened, why it happened, and how it can be done better.
To apply this tool ask yourself:
- What was supposed to happen?
- What did happen?
- What are some improvements?
- What are some sustainments?
- What can be done to improve the result?
Brooks's law
Adding manpower to a late software project makes it later.
Dependency Injection (DI)
A form of inversion of control, dependency injection aims to separate the concerns of constructing objects and using them, leading to loosely coupled programs.
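A tiny Python sketch of the idea (the class and method names are invented for illustration): the Signup service receives its mailer from outside instead of constructing one, so production and test wiring differ only in what gets injected.

```python
class SmtpMailer:
    """Production dependency (stand-in; a real one would talk SMTP)."""
    def send(self, to, body):
        print(f"sending to {to}: {body}")

class FakeMailer:
    """Test double: records messages instead of sending them."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

class Signup:
    """Constructs nothing itself: the mailer is injected, so the class
    stays loosely coupled to any particular mail implementation."""
    def __init__(self, mailer):
        self.mailer = mailer
    def register(self, email):
        self.mailer.send(email, "welcome!")
```

Usage: `Signup(SmtpMailer())` in production, `Signup(FakeMailer())` in tests — the using code never changes.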
Dependency vs Composition vs Aggregation
If the contained object cannot exist without the existence of container object, then it is called composition (engine and a car).
Aggregation: the child object can exist outside the parent object (driver and a car).
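The car/engine/driver distinction above can be made concrete in a few lines of Python (names are illustrative):

```python
class Engine:
    def __init__(self):
        self.running = False

class Driver:
    def __init__(self, name):
        self.name = name

class Car:
    def __init__(self, driver=None):
        # Composition: the engine is created and owned by the car;
        # it has no meaningful life outside it.
        self.engine = Engine()
        # Aggregation: the driver exists independently and is merely
        # referenced; removing or swapping them leaves both objects valid.
        self.driver = driver
```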
Don’t Repeat Yourself.
A software engineering principle stating that "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system." It first appeared in the book The Pragmatic Programmer by Andy Hunt and Dave Thomas.
Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.
The Pragmatic Programmer by Andy Hunt and Dave Thomas
EAFP
Stands for "Easier to Ask for Forgiveness than Permission."
try:
    x = my_dict["key"]
except KeyError:
    pass  # say sorry
Fail fast
A system design approach recommending that errors should be reported as early as possible.
Imperative vs Declarative style of programming
A declarative style of programming tells a computer what to do without specifying how, while an imperative style of programming describes how to do it.
Stands for "Keep It Simple, Stupid."
Simplest solution is often the best.
This calls for seeking the simplest possible solution, with the fewest moving parts. The phrase was coined by Kelly Johnson, a highly accomplished aerospace engineer who worked in the real Area 51 designing some of the most advanced aircraft of the 20th century.
Live and die by documentation
Live and die by documentation.
Software design, decisions, plans have to be documented.
Minimum viable product
MVP is the product with the highest return on investment versus risk. It is the sweet spot between products without the required features that fail at sunrise and the products with too many features that cut return and increase risk. The term was coined and defined by Frank Robinson, and popularized by Steve Blank, and Eric Ries.
The first step is to enter the Build phase as quickly as possible with a minimum viable product (MVP). The MVP is that version of the product that enables a full turn of the Build-Measure-Learn loop with a minimum amount of effort and the least amount of development time.
A programming paradigm based on the concept of "objects", which are data structures that contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods.
There are four main principles: encapsulation, abstraction, inheritance, and polymorphism.
Code smell - a symptom of bad OO design.
In programming languages, encapsulation is used to refer to one of two related but distinct notions, and sometimes to the combination thereof:
- A language mechanism for restricting access to some of the object's components.
- A language construct that facilitates the bundling of data with the methods (or other functions) operating on that data.
Abstraction is a technique for managing complexity of computer systems. It works by establishing a level of complexity on which a person interacts with the system, suppressing the more complex details below the current level.
[We] started to push on the inheritance idea as a way to let novices build on frameworks that could only be designed by experts.
Alan Kay, The Early History of Smalltalk
Inheritance is when an object or class is based on another object (prototypal inheritance) or class (class-based inheritance), using the same implementation (inheriting from an object or class) specifying implementation to maintain the same behavior (realizing an interface; inheriting behavior).
For many events, roughly 80% of the effects come from 20% of the causes.
80 percent of results will come from just 20 percent of the action.
Vilfredo Federico Damaso Pareto one day noticed that 20 percent of the pea plants in his garden generated 80 percent of the healthy peapods. Then he discovered that 80 percent of the land in Italy was owned by just 20 percent of the population, 80 percent of production typically came from just 20 percent of the companies.
Postel's law
Also known as the robustness principle.
Be conservative in what you do, be liberal in what you accept from others.
Principle of simplicity
Simpler is better.
Reinventing the wheel
Doing something again, from the beginning, especially in a needless or inefficient effort.
Rubber duck debugging
Rubber duck debugging is an informal term used in software engineering for a method of debugging code. The name is a reference to a story in the book The Pragmatic Programmer in which a programmer would carry around a rubber duck and debug their code by forcing themselves to explain it, line-by-line, to the duck.
Single responsibility (SRP)
Gather together those things that change for the same reason, and separate those things that change for different reasons.
Robert C. Martin, Single Responsibility Principle
According to Wikipedia:
In object-oriented programming, the single responsibility principle states that every class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility.
Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.
Liskov substitution (LSP)
States that, in a computer program, if S is a subtype of T, then objects of type T may be replaced with objects of type S (i.e., objects of type S may substitute objects of type T) without altering any of the desirable properties of that program (correctness, task performed, etc.)
Interface segregation (ISP)
ISP splits interfaces which are very large into smaller and more specific ones so that clients will only have to know about the methods that are of interest to them. Such shrunken interfaces are also called role interfaces. ISP is intended to keep a system decoupled and thus easier to refactor, change, and redeploy.
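A small Python sketch of role interfaces using typing.Protocol (the Printer/Scanner names are invented): clients depend only on the narrow role they actually use, not on one fat multi-function interface.

```python
from typing import Protocol

class Printer(Protocol):
    """Role interface: only the printing concern."""
    def print_doc(self, doc: str) -> None: ...

class Scanner(Protocol):
    """A separate role interface; devices that can't scan never see it."""
    def scan(self) -> str: ...

class InkjetPrinter:
    """Implements only the Printer role; no stub scan() method needed."""
    def __init__(self):
        self.printed = []
    def print_doc(self, doc: str) -> None:
        self.printed.append(doc)

def run_print_job(p: Printer, doc: str) -> None:
    # The client knows about print_doc and nothing else.
    p.print_doc(doc)
```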
Dependency inversion (DIP)
A. High-level modules should not depend on low-level modules. Both should depend on abstractions.
B. Abstractions should not depend on details. Details should depend on abstractions.
Shiny Object Syndrome
or SOS - Eclectic Thoughts on Shiny Distractions.
TDD
Stands for "Test-Driven Development" (also seen as test-driven design).
The next action
A powerful principle from the GTD technique: the next action must be defined.
The secret of getting ahead is getting started. The secret of getting started is breaking your complex overwhelming tasks into small, manageable tasks, and then starting on the first one.
Next action - the next physical, visible activity that progresses something toward completion. It is specific enough that you know where it happens, and with what tools (if any). What "doing" looks like.
The twelve-factor app
The twelve-factor app is a methodology for building software-as-a-service apps.
The Zen of Python
PEP-20 by Tim Peters:
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one - and preferably only one - obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than right now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea - let's do more of those!
Using common sense
Your common sense is your natural ability to make good judgments and to behave in a practical and sensible way.
Extreme programming (XP)
A software development methodology intended to improve software quality and responsiveness to changing customer requirements.
- extensive code review
- unit testing of all code
- not programming features until they are actually needed
- a flat management structure
- code simplicity and clarity
- expecting changes in the customer's requirements as time passes and the problem is better understood
- frequent communication with the customer and among programmers
Extreme Programming Explained (1999) by Kent Beck.
Stands for "You Ain't Gonna Need It."
A slogan to avoid implementing functionality that is not immediately necessary based on assumptions about future needs.
RSNA Measurement Crowd Sourcing tool
Based on crowdQuant
Setup and Run
- git clone https://github.com/QTIM-Lab/rsnaCrowdQuant
- npm install
- npm install connect-livereload --save-dev (?)
- npm start
To build the application you can run this command in the root folder:
$ npm run build
This will create the bundle.js bundle.
npm start will start a local server that communicates with a live CouchDB.
Build Views in the DB
- tag (with collection-tag (by anatomy e.g. liver, lung)) images on upload to database - Jayashree
- build view per collection-tag by enhancing views.py - Jayashree -> use filename path to create view
- add kiosk mode as a query parameter -- don't require the username -- login times out after 2 minutes
- improve the registration form with a few big buttons for the categories
- statistics page - Steve to make stubs, Jayashree to create infovis
- continuous replication to a backed up disk - operational plan TBD with RSNA docent
- about box with acknowledgements
- tutorial info
- auto window/level to get lung window/liver ? - Rob
- Android (maybe ios?) Can make 2 Length Measurements on the same image
- Potentially change least measured to be an array of all "least measured" and select one at random
- log skipped cases
- annotator is hardcoded -fix that
- make getNextSeriesForAnnotator call parallel
- Zoom resets when the windows is resized - not sure if intentional (ALB)
- address "Uncaught Error: image has not been loaded yet"
- consider moving save button to avoid accidental selection of skip button instead.
- add progress sort on download -Steve
- measurement disappears under certain window - maybe make measurement a different color/line width
- investigate how "U" showed up as annotator in DB ?
- consolidate databases? - probably not needed for now
- (bug) If you zoom out extremely far until the original image looks like a dot, it is very hard to zoom back in because the center of the image changes. This might be done by accident, so when someone presses Clear, zoom should reset -- but it isn't. We should fix this. (ALB)
- Map up/down keyboard keys to change slice, in case mouse wheel/pad not available
- You can't submit a measurement if you're not on the same slice as your measurement. Given people's tendencies to "check their work" on other slices, and the difficulty of navigating back to one particular slice, perhaps we should lift this requirement (ALB)
- pinch zoom on mobile?
- include seriesUID with measurements (still need to see if this is the right way)
- (bug) in a session: first case sends 1 measurement, second sends 2, third 3, etc.
- need to record position of start/end of line drawn lengths.data.handles.end.y near line 40 commands.js
- include slice index with measurements
- change browser tab title from "lightweight viewer" to "RSNA CrowdQuant"
- (bug) save in hamburger does not record annotator (removed instead)
- remove hamburger
- remember username in localStorage for easier re-login
- include slice UID with measurement
- investigate why scrolling is not working on iPad - works with 3 finger scrolling
- livereload is timing out on the real site: Fix or Remove
- see if we can save a screenshot with measurement document - Steve
- add ability to skip the case (e.g. when there is no tumor)
- select next image with tag and fewest measurements - Steve to make query function
- (bug) logging in as existing user starts with series index 0 again (already measured cases) - Steve to provide query function to suggest next case that is least reviewed and not already reviewed by this user
- investigate 2 finger scrolling for mobile - Rob
- skip case does not work
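The "least measured" item above (collect all series tied for the fewest measurements and select one at random) can be sketched in a few lines of Python. Here `counts` maps a series ID to its measurement count; in the real app this would come from a CouchDB view, and all names are illustrative:

```python
import random

def next_case(counts):
    """Pick the next series to annotate: gather every series tied for
    the fewest measurements and choose one of them at random."""
    fewest = min(counts.values())
    least_measured = [sid for sid, n in counts.items() if n == fewest]
    return random.choice(least_measured)
```

Randomizing among the ties spreads annotators across under-measured cases instead of everyone piling onto the same one.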
Don't take too much care of it
It's unusual to begin a rosemary care article by telling you not to take too much care of the plant. However, it's the first piece of advice we want to give you. Although there are things that your rosemary plant will not tolerate, on the whole it is a fairly hardy plant that does not require much care.
So you know, if you are a person who doesn't have much time to pay attention to plants, rosemary is one of the best options for you.
Basic care of rosemary
Lighting and temperature
Rosemary needs to be in a location where it receives a lot of natural light. With rosemary you don't need to worry about whether it's getting too much sunlight or not. Your rosemary plant can be exposed to direct sunlight for several hours a day.
As we have mentioned, rosemary is a fairly hardy plant, so it can adapt to various kinds of weather. Although the plant prefers warm climates, it will hold up well in cold ones. What you need to keep in mind is that you should protect your rosemary plant from drafts in any season.
We have already mentioned that if you don't have much time to look after a plant, rosemary is for you. You don't need to water your rosemary plant too often. Rosemary is used to not getting too much water.
To water it properly, you should observe the soil the rosemary plant is growing in. When the substrate looks dry, it is time to water. Water moderately, so that you barely get the soil wet. Under no circumstances should you allow water to pool in the substrate. This can damage the roots of the rosemary.
Rosemary is suitable for all types of soil. However, the soil should be basic (alkaline) for it to develop better. In addition, it should be a very well-drained soil, one that does not retain moisture but rather tends to dry. Use a substrate that is not too heavy.
You don't need to worry about fertilizing the substrate. However, if you wish to do it to help your plant grow, you can use any fertilizer with enough organic matter. The ideal frequency for fertilizing is once a year.
Pruning rosemary is fairly important. Remember that pruning promotes stronger and healthier growth in your plant. Take advantage of pruning to remove leaves and flowers that look bad or withered, for example. Even more important is to remove the weak stems. This way, each new stem that grows will be stronger and will not bend.
Lastly, you can use pruning to give your rosemary plant the shape you want. This is a plant that spreads widely, so if you have a limited area for it, it is best to trim it. Always use a sharp, clean pair of scissors. Remember that if the tools you prune with are not disinfected, you run the risk of your plant getting sick.
The biggest threat our rosemary plant faces is fungus. Remember that rosemary is not a friend of moisture, in part because moisture contributes to the growth of fungus on its roots.
If you notice any signs that your plant is sick, it is very important that you prune the parts that are affected, or the disease will continue to spread to the whole plant.
You can detect disease through spots on the leaves, bad smells coming from the plant, or the presence of parasites, among other signs. In any of these scenarios, you should go to your favorite gardening store and ask for advice.
You are now ready to harvest your rosemary. You only need to take a few stems straight from the plant. If you need some ideas on how to use it, here are a few: use it in a dish, in some home remedies, or to make yourself a relaxing tea. You will see how useful it is to grow your own rosemary at home.
Tell us in the comments about some use you have found for your rosemary branches.
|
OPCFW_CODE
|
MySQL Indexes: What is Size field in Indexes?
What is the Size field in Indexes and how it works?
MySQL can use indexes on columns more efficiently if they are declared as the same type and size. In this context, VARCHAR and CHAR are considered the same if they are declared as the same size. For example, VARCHAR(10) and CHAR(10) are the same size, but VARCHAR(10) and CHAR(15) are not. (See the MySQL reference manual.)
When you create an index, you can specify that only a prefix of the value should be included in the index. This is the size of the index. It's optional for most datatypes, but required for TEXT and BLOB columns. It's also needed for any column if its length exceeds the limit on index size; for instance, InnoDB's index size limit is 767 bytes for some table formats; if you want to index a VARCHAR(1023) column, you'll need to specify a prefix size less than 768.
See Column Prefix Key Parts
This is how much of the value will be uniquely indexed. You can index the whole value on a one-to-one basis, or you can index just a prefix which might put multiple values into one bucket. This is a performance/space trade off.
Here is a simplified example.
If you were to create an index with a single character...
create table animals (
name varchar(255),
index(name(1))
);
That will only index the first character of each name.
index name
----------------
A Ape
A Aardvark
A Ant
A Anteater
B Baboon
C Cat
D Dog
D Dingo
So when you query where name = 'Aardvark' it will use the A index to find a list of Ape, Aardvark, Ant, Anteater and search it. The index improves the performance of the query, but there's still some searching to do.
Let's say you had index(name(3)).
index name
----------------
Ape Ape
Aar Aardvark
Ant Ant
Ant Anteater
Bab Baboon
Cat Cat
Dog Dog
Din Dingo
Now when you query where name = 'Aardvark' it will use the Aar index to find just Aardvark and will perform fast. But if you search for where name = 'Ant' it will use Ant to find Ant, Anteater and have to search that list.
You need to make the decision between index size and performance that fits your data and queries.
A practical example, say I'm storing SHA-1 checksums as text. Those are 40 characters long. But for all practical purposes the first 7 or 8 characters are very, very likely to be unique. So I store all 40 characters, but only index the first 8.
checksum char(40),
index(checksum(8))
Now where checksum = '97531bc4cb33c00f3e9ff10d65386b8e96cdae3d' will use the 97531bc4 index and likely produce a single value. This potentially saves a lot of space without any impact on performance.
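To see why such a short prefix is usually selective enough for hash values, here is a quick Python illustration (my own sketch, not part of the original answer):

```python
import hashlib

# Illustrative sketch: 8-character prefixes of SHA-1 hex digests are almost
# always unique, because there are 16^8 (about 4.3 billion) possible
# prefixes for the values to land in.
digests = [hashlib.sha1(str(i).encode()).hexdigest() for i in range(1000)]
prefixes = {d[:8] for d in digests}
# prefixes almost certainly has one entry per digest: no bucket sharing,
# so an index on the first 8 characters behaves like a full index here.
```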
|
STACK_EXCHANGE
|
Perform connection check using head request if http_proxy is set
I guess this is an edge case, but I was trying to run maestral's cli on a server where connections are only possible through a proxy. The Python SDK from Dropbox uses the requests python package which handles environment variables like http_proxy or similar accordingly.
Maestral's check_connection function uses sockets to establish a connection with dropbox.com over port 80, which was not possible on my machine. This PR uses a HEAD request (which honours the *_proxy environment variables) if http_proxy is set, and uses the existing socket method otherwise.
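As a rough illustration of the hybrid check described above, a stdlib-only sketch might look like this (the function name and fallback logic are mine and not necessarily what the PR implements; urllib honours http_proxy/https_proxy by default):

```python
import os
import socket
import urllib.request

def check_connection(hostname: str = "www.dropbox.com", timeout: int = 2) -> bool:
    """Return True if hostname is reachable, False otherwise.

    If a proxy is configured via environment variables, issue a HEAD
    request (urllib routes it through the proxy automatically); otherwise
    fall back to a plain TCP connection, which is slightly faster.
    """
    proxy_vars = ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY")
    if any(v in os.environ for v in proxy_vars):
        try:
            req = urllib.request.Request(f"https://{hostname}", method="HEAD")
            with urllib.request.urlopen(req, timeout=timeout):
                return True
        except Exception:
            return False
    try:
        with socket.create_connection((hostname, 80), timeout=timeout):
            return True
    except OSError:
        return False
```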
Thanks for catching this, we should definitely respect proxy settings!
Is there any reason not to default to a head request instead of using socket.create_connection()? I'm not that comfortable with the "http_proxy" in os.environ check, it probably misses several cases such as https_proxy or upper-case environment variables.
Finally, could you add a line to CHANGELOG.md?
I think the lower case variant is the most commonly used, but I guess checking for both cases would be better. The requests package seems to support both versions as well (see docs).
In my quick tests, the socket version took around 11ms and the HEAD request version around 26ms for a successful connection, and around 1ms and 3ms respectively for an unsuccessful one (both cases without the use of a proxy). When using a proxy, the latter case is of course slower and depends on the proxy.
But because it says "low-latency" check in the comment, I was hesitant to replace the complete method with this head request variant.
I just added a line to the changelog, but I just saw that the CI failed and I'm not sure why.
But because it says "low-latency" check in the comment, I was hesitant to replace the complete method with this head request variant.
The "low latency" statement is a bit misleading, it is low latency only because of its default timeout of 2 sec. This is compared to requests from the sync threads which have a timeout of 100 sec. The reason for such a long timeout is to enable uploads / downloads even when the internet connection is flaky.
check_connection on the other hand is called every 10 sec. Depending on its return value:
If it returns True and sync threads were stopped from a previous ConnectionError, they are resumed.
If it returns False, the displayed status is changed to "Connecting...". However, sync threads are not stopped. Instead, they will raise a ConnectionError in their own time, from their own timeout value.
I therefore don't mind the slow overhead of requests vs socket if it results in simpler code.
I just added a line to the changelog, but I just saw that the CI failed and I'm not sure why.
The tests are currently a bit flaky. First, spawning a new process sometimes hangs when run from pytest on macOS (pytest meddles with stdin / stdout). Second, one of the sync engine tests sometimes fails due to a bug in the watchdog dependency which results in missing notifications for file metadata changes on macOS. It's already patched upstream and I expect a new release soon. Feel free to ignore those failures...
You should however pass linting. We use flake8 and black to enforce a uniform code style and mypy for static type checking.
Okay I just replaced the socket method completely, squashed the commits and checked for linting.
|
GITHUB_ARCHIVE
|
# Preprocess Oraby Fact-Feel into train, dev, and test tsv files.
import csv
import os
def process_raw(read_path: str, write_path: str):
    """
    Read Oraby et al. (2015) in its raw format. Each document is a single
    file; the directory structure encodes the splits and labels.
    :param read_path: The name of the top-level directory the files are in.
    :param write_path: The directory to write the TSV files to.
    """
    for split_dir in os.listdir(read_path):
        out_name = os.path.join(write_path, f"Oraby_fact_feel_{split_dir}.tsv")
        with open(out_name, 'w', encoding='utf-8', newline='') as outf:
            writer = csv.writer(outf, delimiter='\t')
            writer.writerow(["DocID", "Label", "Text"])
            for label in os.listdir(os.path.join(read_path, split_dir)):
                for f in os.listdir(os.path.join(read_path, split_dir, label)):
                    # splitext drops the .txt suffix; str.strip('.txt') would
                    # incorrectly strip those characters from both ends.
                    idx = os.path.splitext(f)[0]
                    with open(os.path.join(read_path, split_dir, label, f),
                              'r', encoding='latin-1') as fh:
                        contents = " ".join(fh.readlines()).replace('\n', ' ').replace('\r', ' ').rstrip()
                    writer.writerow([idx, label, contents])
|
STACK_EDU
|
Myeclipse error updating profile
The following are some of the build and run commands available from the Run menu. The console window will appear, notifying you about the progress of execution. Deletes the contents of the nbbuild and nbdist folders. The project import does not impose changes on the Eclipse project structure, so it does not interfere with the way the project works in Eclipse. The Uninstall button is dimmed out.
However, changes to a Java class will require a restart. The openmrs project is a parent project. If there are references in your project metadata to servers or other resources that NetBeans cannot resolve, the node for the project will appear in red. Import failed due to: Press OK on the Preferences dialog. Here you can set up the execution environment. I still see 3. I'm liking the edit screen command from the presentation mode. If you make a configuration change in the Eclipse project, you can resynchronize the NetBeans project. Now you select "OpenMRS" and run or debug it. WFS starts working if I restore the deleted files, but I suspect it's using the old version, not the new one. Brek 01 Mar: Thanks. This dialog box alerts you to a specific reference problem with one of your project libraries. So, when I tick WireframeSketcher and then select Next. By default, this file is called build. After you have completed the wizard and closed any of the above informational dialog boxes, nodes for the projects will appear in the Projects window. Can you email me your log file? How to debug a web application: the Jetty plugin can pick up any changes to static resources, so changes to JSP, property or CSS files don't require a restart. "Eclipse platform for project ProjectName cannot be used. But this directory does not exist." You can customize this script according to the needs of your project. More details can be found in the IDE's log file. NetBeans will use the default platform. Running an application in Eclipse is extremely easy. Store NetBeans project data inside Eclipse project folders.
Validation is a check for potential import problems and their solutions. For more details on building and running your application, as well as setting the target platform, see Creating, Building, and Configuring NetBeans Projects. If you see "Address already in use", most likely the default port is in use. Change the port under the Maven section. Press OK on the Preferences dialog. The console output was only showing the first line of output, and I didn't see the vertical scroll bar at first. If the classpath in Eclipse has changed since you originally imported the project, you can use the Resynchronize feature to update the classpath in the corresponding NetBeans project. If you press the corresponding button on the error dialog, the system will apply this setup and never ask you for it again. Also, it is better to store NetBeans project data inside Eclipse project folders.
|
OPCFW_CODE
|
I am still new to Speckle and need some advice. I am involved in a huge number of projects with different project organisations, different stages and different architects, where I need to extract data from architectural Revit models, usually geometrical data to perform thermal simulations. The architects do not know me since I am not always directly involved in the projects; they do not know about Speckle and they do not always understand what data I need. I now wonder if you have any idea how Speckle can be used without scaring the architects or creating extra work for them in projects like this? Usually, asking the architect to send a gbXML or even a custom IFC is too difficult.
Thanks for prompting a good discussion topic! In general, we’ve seen two main ways in which Speckle gets adopted on projects:
- a top down one, where it’s “mandated” by project managers
- a bottom up one, where people see the value and therefore use Speckle
The first case is usually when PMs realize the amount of information they can easily gather and then leverage by continuously “harvesting” their models. They can consume it via systems like custom dashboards or PowerBi integrations (we have a bunch of these on our roadmap). Or simply they want to enable people like you to do their job better and faster!
The second is when the designers have interoperability or collaboration needs, or simply want to visualize and share their models online. We are also working on various new features such as comments that will enhance collaboration even more.
So ideally either of the above could be enough to encourage your architects to use Speckle. We’re also working to simplify a lot the process of adopting and using our connectors with features such as the 1-click send and scheduled sends, but if you find or think there might be other barriers to adoption, let us know, we’d be eager to hear from you!
Finally, you could also ask them to share an IFC with you and consume it in a similar way to how @m-clare did, but using files has its drawbacks https://twitter.com/eng_mclare/status/1504704071772475396
Thanks for a great reply. I think we will see the first way in many projects in the future, but until then I think I will be the only person in the projects using Speckle, and this will be a strange and scary thing for many architects (installing unknown things on their computers and having to learn this just because of me). It is very similar to asking for gbXML, which is always a problem for the architect.
Therefore I think it is important to make it as easy for the architect as possible. Preferably I would like to be able to set up the entire data selection for the architect so it is just a press of a button. At least in Revit, since I can do this in viewer mode. ArchiCAD will be more difficult since I cannot open those files. Another problem is that the architect probably needs to add data, for example rooms that cover the entire internal building volume. I am not sure how to solve this; maybe running some script before sending the data, since the architect probably does not want this information in the BIM model.
|
OPCFW_CODE
|
first: The function you always missed in Python
first is an MIT-licensed Python package with a simple function that returns the first true value from an iterable, or None if there is none. If you need more power, you can also supply a key function that is used to judge the truth value of the element, or a default value if None doesn't fit your use case.
N.B. I’m using the term “true” consistently with the Python docs for all(): it means that the value evaluates to true, i.e. bool(value) is True.
It's short, it works.
If it breaks in a future Python release, I will fix it.
But other than that, no work on the project is intended.
A simple example to get started:
>>> from first import first
>>> first([0, None, False, '', (), 42])
42
However, it’s especially useful for dealing with regular expressions:

import re
from first import first

re1 = re.compile('b(.*)')
re2 = re.compile('a(.*)')

m = first(regexp.match('abc') for regexp in [re1, re2])

if not m:
    print('no match!')
elif m.re is re1:
    print('re1', m.group(1))
elif m.re is re2:
    print('re2', m.group(1))
The key function gives you even more selection power.
If you want to return the first even number from a list, just do the following:
>>> from first import first
>>> first([1, 1, 3, 4, 5], key=lambda x: x % 2 == 0)
4
default, on the other hand, allows you to specify a value that is returned if none of the elements is true:

>>> from first import first
>>> first([0, None, False, '', ()], default=42)
42
The package consists of one module containing one function:

from first import first
first(iterable, default=None, key=None)
This function returns the first element of iterable that is true. If there is no true element, the value of default is returned, which is None by default. If a callable is supplied in key, the result of key(element) is used to judge the truth value of the element, but the element itself is returned.
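The behavior described above fits in a few lines; this is my own minimal sketch, not necessarily the package's actual implementation:

```python
def first(iterable, default=None, key=None):
    """Return the first true element of iterable, or default if none."""
    if key is None:
        key = bool  # plain truthiness when no key function is given
    for element in iterable:
        if key(element):
            return element
    return default
```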
first has no dependencies and should work with any Python available.
first brings nothing to the table that wasn’t possible before.
However the existing solutions aren’t very idiomatic for such a common and simple problem.
The following constructs are equivalent to first(seq) and work since Python 2.6:
next(itertools.ifilter(None, seq), None)
next(itertools.ifilter(bool, seq), None)
next((x for x in seq if x), None)
None of them is as pretty as I’d like them to be.
The re example from above would look like the following:

next(itertools.ifilter(None, (regexp.match('abc') for regexp in [re1, re2])), None)

next((regexp.match('abc') for regexp in [re1, re2] if regexp.match('abc')), None)

next((match for match in itertools.imap(operator.methodcaller('match', 'abc'), [re1, re2]) if match), None)
Note that in the second case you have to call match() twice.
The third example "fixes" that problem but also summons Cthulhu.
For comparison, one more time the first version:
first(regexp.match('abc') for regexp in [re1, re2])
Idiomatic, clear and readable. Pythonic. :)
The idea for first goes back to a discussion I had with Łukasz Langa about how the re example above is painful in Python.
We figured such a function is missing from Python; however, it’s rather unlikely we’d get it in, and even if we did, it wouldn’t arrive before 3.4 anyway, which is years away as yours truly is writing this.
So I decided to release it as a package for now. If it proves popular enough, it may even make it into Python’s stdlib in the end.
|
OPCFW_CODE
|
Thanks to our open source contributors, Layer5 projects are well represented at KubeCon. Attend our talks to help you get started with managing your own service mesh!
KubeCon + CloudNativeCon NA 2020 is here and Layer5 is in the spotlight!

Service Mesh Specifications and Why They Matter in your Deployment
November 18, 2020

As the ubiquity of service meshes unfolds, so does the need for vendor- and technology-agnostic interfaces to interact with them. The following three open specifications solve the challenges of interoperability, workload management and performance management between service meshes: Service Mesh Performance (SMP), Service Mesh Interface (SMI), and Multi-Vendor Service Mesh Interoperation (Hamlet). Attend to learn what makes each of these specifications unique and why they are very much needed.
CNCF SIG Network: Intro and Deep-Dive
November 20, 2020

"It’s the network!" is the cry of every system administrator and every developer. With the increased prevalence of microservice-based distributed systems, it’s true: networking as a discipline has never been more critical to the efficient operation of cloud native deployments. Networking primitives, including load balancing, observability, authentication, authorization, policy, rate limiting, QoS, mesh networks, legacy infrastructure bridging, and so on, are now receiving substantial development and investment throughout the industry and are the focus of the CNCF SIG Network.
See what the CNCF's Service Mesh Working Group has been up to. Join this talk for an intro to the SIG, its charter and a deeper discussion of current cloud native networking topics being advanced in this SIG. Current CNCF projects in-scope: Open Service Mesh, Kuma, Chaos Mesh, Ambassador, CNI, CoreDNS, Envoy, gRPC, Linkerd, NATS, Network Service Mesh.
Meet the Maintainers: Service Mesh Interface
November 17, 2020

Join the Service Mesh Interface (SMI) and Meshery maintainers for a Meet the Maintainers office hours. Learn all about Service Mesh Interface and how Meshery is the service mesh manager that determines which service meshes are in or out of conformance with the specification.
We hope to see you at KubeCon + CloudNativeCon NA 2020. Be sure to say hello in the Layer5 Community!
MeshMap is here!
Design your deployments the way you want. Drag and drop your cloud native infrastructure using a palette of thousands of versioned Kubernetes components. Say goodbye to YAML configurations. Have your cloud native deployments automatically diagrammed. Deployments configured and modeled in Designer mode can be deployed into your environment and managed using Visualizer. Discover a catalog of best-practice cloud native patterns.
|
OPCFW_CODE
|
Suppose one has a microprocessor or microcontroller which is supposed to perform some action when it notices that a button is pushed.
A first approach is to have the program enter a loop which does nothing except look to see if the button has changed yet and, once it has, perform the required action.
A second approach in some cases would be to program the hardware to trigger an interrupt when the button is pushed, assuming the button is wired to an input that's wired so it can cause an interrupt.
A third approach is to configure a timer to interrupt the processor at some rate (say, 1000x/second) and have the handler for that interrupt check the state of the button and act upon it.
The first approach uses a busy-wait. It can offer very good response time to one particular stimulus, at the expense of totally tuning out everything else. The second approach uses event-triggered interrupt. It will often offer slightly slower response time than busy-waiting, but will allow the CPU to do other things while waiting for I/O. It may also allow the CPU to go into a low-power sleep mode until the button is pushed. The third approach will offer a response time that is far inferior to the other two, but will be usable even if the hardware would not allow an interrupt to be triggered by the button push.
In cases where rapid response is required, it will often be necessary to use either an event-triggered interrupt or a busy-wait. In many cases, however, a polled approach may be most practical. Hardware may not exist to support all the events one might be interested in, or the number of events one is interested in may substantially exceed the number of available interrupts. Further, it may be desirable for certain conditions to generate a delayed response. For example, suppose one wishes to count the number of times a switch is activated, subject to the following criteria:
- Every legitimate switch event will consist of an interval from 0 to 900us (microseconds) during which the switch may arbitrarily close and reopen, followed by an interval of at least 1.1ms during which the switch will remain closed, followed by an interval from 0 to 900us during which the switch may arbitrarily open and reclose, followed by an interval of at least 1.1ms during which the switch will remain open.
- Software must ignore the state of the switch for 950us after any non-ignored switch opening or closure.
- Software is allowed to arbitrarily count or ignore switch events which occur outside the above required blanking interval, but which last less than 1.1ms.
- The software's reported count must be valid within 1.99ms of the time the switch is stable "closed".
The easiest way to enforce this requirement is to observe the state of the switch 1,000x/second; if it is seen "closed" when the previous state was "open", increment the counter. Very simple and easy; even if the switch opens and closes in all sorts of weird ways, during the 900us preceding and following a real event, software won't care.
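The 1,000x/second polled counter described above can be sketched in a few lines (Python purely for illustration; on a microcontroller this logic would live in the timer interrupt handler, and the names are my own):

```python
class SwitchCounter:
    """Count open->closed transitions, sampled once per millisecond.

    Because the switch is only observed 1000x/second, any bouncing inside
    the 900 us windows around a real transition is simply never seen.
    """

    def __init__(self):
        self.last_closed = False
        self.count = 0

    def poll(self, closed_now: bool) -> None:
        """Called from the 1 kHz timer tick with the current switch state."""
        if closed_now and not self.last_closed:
            self.count += 1  # open -> closed edge detected
        self.last_closed = closed_now
```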
It would be possible to use a switch-input-triggered interrupt along with a timer to yield faster response to the switch input, while meeting the required blanking requirement. Initially, the input would be armed to trigger the next time the switch closes. Once the interrupt was triggered, software would disable it but set a timer to trigger an interrupt after 950us. Once that timer expired, it would trigger an interrupt which would arm the interrupt to fire the next time the switch is "open". That interrupt would in turn disable the switch interrupt and again set the timer for 950us, so the timer interrupt would again re-enable the switch interrupt. Sometimes this approach can be useful, but the software is a lot more complicated than the simple polled approach. When the timer-based approach will be sufficient, it is often preferable.
In systems that use a multitasking OS rather than direct interrupts, many of the same principles apply. Periodic I/O polling will waste some CPU time compared with having code which the OS won't run until certain events occur, but in many cases both the event response time and the amount of time wasted when no event occurs will be acceptable when using periodic polling. Indeed, in some buffered I/O situations, periodic polling might turn out to be quite efficient. For example, suppose one is receiving a large amount of data from a remote machine via serial port, at most 11,520 bytes will arrive per second, the device will send up to 2K of data ahead of the last acknowledged packet, and the serial port has a 4K input buffer. While one could process data using a "data received" event, it may be just as efficient to simply check the port 100x/second and process all packets received up to that point. Such polling would be a waste of time when the remote device wasn't sending data, but if incoming data was expected it may be more efficient to process it in chunks of roughly 1.15K than to process every little piece of incoming data as soon as it comes in.
|
OPCFW_CODE
|
I've got a machine running ESXi 4 with 5 VMs + 1 Acronis appliance with vmProtect 7. Until last week, everything worked fine; there was not a single problem with the backups. I did some Windows updates on the machines, and now the backup is failing. I'm not sure if the problem has anything to do with the Windows updates, but the problems started that day.
I've got a task that will back up the entire physical server, 5 VMs. Everything goes OK, but when it starts to back up my SBS2011 server, it fails (see the VM log below). The SBS2011 server itself is throwing some warnings: "The system failed to flush data to the transaction log. Corruption may occur." A few days ago it was throwing some errors that my disk wasn't ready for access yet.
What i've done so far:
- Completely updated SBS2011.
- Reboot the server (+3 times).
- Recreated the vmProtect task.
- Reinstalled vmProtect.
- Rebooted my NAS (+3 times).
- Updated VMware tools (8.6.5).
When I disable the VMware Tools Service, the backup will succeed with a warning that my backup isn't consistent. When I enable the service again, it fails.
Is there anything I can try/do to fix this problem?
Thank you very much in advance,
Failed to perform the requested operation.
Error code: 13
Message: Failed to perform the requested operation.
Error code: 103
Message: Failed to open the virtual machine ([Store-raid5] SBS2011/SBS2011.vmx).
Error code: 64
Message: VMware error: 'An internal error.'.
Error code: 52
Message: Awaiting task 'CreateSnapshot' has failed. Reason: An error occurred while quiescing the virtual machine. See the virtual machine's event log for details..
Uninstall VMware Tools in your VM and reinstall it using a custom setup. Unselect "Volume Shadow Copy Service" (VSS) and then install VMware Tools.
You can also use another backup tools such as Veeam Backup & Replication Free Edition.
Try it and let me know any issue further.
I've tried this option. Now I see the same warning when disabling the VMware tools service > Inconsistent. This is giving me a false feeling of having a good backup.
Isn't there a fix for this problem?
Do you have a snapshot of that VM or not? Is it important or not? If it isn't important, you can delete it and try again to take a backup.
You can delete snapshot from Snapshot Manager. In VI, right click on the VM and in snapshot menu click Snapshot Manager.
Hope this resolve problem.
At this moment this virtual machine doesn't have a single Snapshot.
I'm currently doing backups with the VMware Tools service disabled. It works, but the backup is inconsistent so I don't trust it.
Thank you for using Acronis software.
My name is Anton and I'm writing you on behalf of Acronis Customer Central.
Please let me know if this solution from our Knowledge Base works for you - http://kb.acronis.com/content/4559
Let me know if there is anything we also can do for you please.
Acronis Customer Central
Uninstalling/reinstalling VMware Tools isn't that simple since we are in an operational environment. We decided to wait for a VMware Tools update and are hoping that this will solve the problem.
|
OPCFW_CODE
|
The last post, How a PMO can use cost trend analysis to identify project budget under and overruns, covered how cost trend analysis can help a PMO quickly identify where there may be budget issues. This post will cover a simple approach that can be used to conduct this analysis.
The objective of conducting this analysis is to identify where a project will under- or overrun its budget. While a project manager should be completing budget reviews and re-forecasts on a regular basis, unfortunately this is not always the case. This is made worse by a reluctance among project managers to give up excess budget "just in case" they need it. To perform the analysis you will need the following data:
- List of projects
- Budget for each project
- Previous months Year to Date (YTD) Actuals for each project
- Current months YTD Actuals for each project
- Estimate to complete (ETC) for each project
This data should be produced by each project on a monthly basis, so it hopefully will not present an issue.
Straight Line Cost Trend Analysis
This is a very simple approach. Take the current month's YTD Actuals, then calculate the numerical value for the current month:
- January = 1
- February = 2
- March = 3
- December = 12
Divide the current month's YTD Actuals by the numerical value for the current month. So if you were using the YTD Actuals for April, you would divide by 4. Then multiply the result by 12 to give the annualised forecast. The forecast can then be compared against the original budget or the revised forecast from the project manager. Note: while this is a simple approach, most projects ramp up monthly spend over time, so it can give low predictions where a project has only recently started. It works well for stable multi-year projects that have small changes in monthly spend.
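As a sketch, the straight-line calculation can be expressed in a few lines of Python (the function name and figures are illustrative):

```python
def straight_line_forecast(ytd_actuals, current_month):
    """Annualise YTD actuals assuming an even monthly run rate (Jan=1 .. Dec=12)."""
    return ytd_actuals / current_month * 12

# April (month 4) with 200,000 spent year to date -> 600,000 annualised
print(straight_line_forecast(200_000, 4))  # 600000.0
```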
Simple Weighted Cost Trend Analysis
This approach addresses the issue of a project ramping up monthly costs during mobilisation by using the latest month's actuals to forecast future spend. To do this, you need the current month's actuals. If this is not available, it can be calculated by taking the current month's YTD Actuals minus the previous month's YTD Actuals. Then calculate the numerical value for the current month (in the same way as in the Straight Line Cost Trend Analysis) and the remaining months in the year; for April you would calculate 12 minus 4 to give 8. Multiply the current month's actuals (not YTD) by the remaining months in the year, then add the current month's YTD Actuals. This gives you a forecast made up of the YTD Actuals plus an estimate weighted on the current month's actuals, the logic being that future months are more likely to resemble the most recent month's spend. Note: be careful that the latest month's actuals are representative and do not include large "one-offs", i.e. hardware, infrastructure purchases, etc. These will distort the numbers, so you may wish to take them out of the monthly calculation and add them back at the end.
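The weighted calculation can be sketched similarly (names and figures are illustrative; the one-off adjustment mentioned in the note is left out):

```python
def weighted_forecast(ytd_actuals, prev_ytd_actuals, current_month):
    """Forecast = YTD actuals + latest month's actuals projected over remaining months."""
    current_month_actuals = ytd_actuals - prev_ytd_actuals
    remaining_months = 12 - current_month
    return ytd_actuals + current_month_actuals * remaining_months

# April: 200,000 YTD, 140,000 YTD at end of March -> 60,000 in April, 8 months remain
print(weighted_forecast(200_000, 140_000, 4))  # 680000
```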
When the results have been calculated, the forecast can then be compared against the latest forecast from the project manager. This will highlight potential under or overruns. The PMO should then meet with the project manager to understand their forecast and to review if there is a credible plan. If there is no clear reason to support the forecast from the project, the PMO should encourage for the estimate to be reviewed and updated. This will ensure that overruns are identified early, remediation action to be taken and avoid budget shocks to the sponsor. If there are significant increases, make sure the business case still makes sense and if not, consider stopping the project. If the review shows an under run, push the project to release funds so that they can be used to fund other opportunities.
Below is an example of a Cost Trend Analysis template that calculates both the straight line and weighted forecasts. This template is available as a free download. Simply click on the link below the picture to download. Click to download PMO Cost Trend Analysis Template
|
OPCFW_CODE
|
using System;
using System.Collections.Generic;
using System.Text;
using WebMagicSharp;
namespace WebMagicSharp.Examples
{
public class ZhihuPageProcessor : IPageProcessor
{
private Site site = Site.Me.SetRetryTimes(3).SetSleepTime(1000);
public Site GetSite()
{
return site;
}
public void Process(Page page)
{
page.AddTargetRequests(page.GetHtml().Links().Regex("https://www\\.zhihu\\.com/question/\\d+/answer/\\d+.*").All());
page.PutField("title", page.GetHtml().Xpath("//h1[@class='QuestionHeader-title']/text()").ToString());
page.PutField("question", page.GetHtml().Xpath("//div[@class='QuestionRichText']//tidyText()").ToString());
page.PutField("answer", page.GetHtml().Xpath("//div[@class='QuestionAnswer-content']/tidyText()").ToString());
if (page.GetResultItems().Get("title") == null)
{
//skip this page
page.SetSkip(true);
}
}
}
}
|
STACK_EDU
|
Canvas VTN v3 Release
Version 3 of Canvas makes it easier to install and use, both as a standalone OpenADR VTN or embedded into a broader DERMS platform
3 minute read ∙ Aug 12th, 2021
Version 3 of Canvas is now available! This release includes new features and enhancements to make Canvas easier to install and use, plus a number of general improvements that we have been making to keep up to date with how OpenADR is being used across many scenarios.
Canvas is a pure-play OpenADR VTN that abstracts away the complexities of the protocol and lets you get started with OpenADR as seamlessly as possible. Since it implements minimal business logic itself, it is not meant to replace complete DERMS (or similar) platforms but rather be the OpenADR interface for any range of programs and products.
Companies run Canvas in their environment, so they maintain control over security, uptime, etc. It can run standalone and be controlled via the UI, or embedded in a broader DERMS platform and controlled programmatically via a custom adapter, which GridFabric will help you develop. Typically, users will end up using a combination of the two, e.g. use a custom adapter to execute programmatic requirements like creating events and reading report data, and use the UI to monitor and test.
We also offer a hosted version that we call Canvas Cloud. The goal of Canvas Cloud is to provide the simplest and lowest cost way to use an OpenADR VTN. You can connect a certified OpenADR VEN to Canvas Cloud and use it to test and demonstrate OpenADR functionality. If you would like a free trial of Canvas Cloud to see if it suits your needs, please contact us.
Improvements for V3
We have added binary releases that include the necessary dependencies for a couple of common environments (Ubuntu 18.04 and Ubuntu 20.04) to make the installation and upgrading processes much faster. Releases are OS specific, so let us know if you need a release for an OS that isn't currently covered.
Easier programmatic integration
We created a sample adapter that can be installed and used as-is for simple integrations and, more likely, extended for more complex integrations and performance enhancements. It exposes some API endpoints for interfacing with VENs and Events, and implements both Basic Auth and JWT-based Authentication.
As Canvas users get beyond the basics into more custom production implementations, GridFabric will help develop a custom adapter that provides the functionality you need and supports performance requirements. We will work with you to scope and execute custom adapter development and provide ongoing support.
The first thing you may notice is that the design of the UI has completely changed.
Navigating around the UI has become easier with a new menu bar design, breadcrumbs, and the removal of some unused or older features. In addition, a host of minor improvements and bug fixes will make using the UI a smoother experience.
V3 includes a new feature that logs all communications between VEN and VTN and displays them in the UI. Seeing what the VEN is sending is useful for debugging interconnection issues with VENs, and can save Canvas users a lot of time.
There are a few options for filtering logs so that the communications log doesn't take up more space than it needs to. Logging can be turned off completely, and polls and/or responses can be filtered out.
Previous Canvas documentation was embedded in the source code. We have moved it onto the web so it can be accessed from anywhere and have made some updates for clarity. It can be found at https://canvas-docs.gridfabric.io/.
OpenADR's use is increasing in demand response programs across North America and the world. Practitioners need to implement OpenADR easily so they can focus on what they do best. With the release of Canvas V3, we have made the simplest, most usable and extensible OpenADR VTN even simpler, more usable and extensible! Please contact us if you'd like to try it out or learn more.
Interested in learning more?
Sign up for our quarterly(ish) newsletter with industry & protocol news, commentary and product updates.
Or if you'd like to discuss, contact us
|
OPCFW_CODE
|
It depends on the class really, some classes have higher final DMG in their skills whole other have none or not much.
I don't know all the skills of WH, but yeah, having 0% final DMG is normal.
As a BaM, I have 0% final DMG as well and it only goes up to 10% while I boss with debuff aura.
It is a bit staggering when I hear my friends say they have 80%+ final DMG cuz I'm just sitting here with 0% , I don't really mind it TBH.
Each final damage source is multiplicative with itself and with %damage, and it does not suffer diminishing returns no matter how high it gets or how many different sources of %final damage you stack.
%damage, on the other hand, suffers diminishing returns because it stacks additively with %boss and with itself.
Weapon multiplier * (primary * 4 + secondary stat) * (att/100) * (1 + %damage + %damage vs boss) * (1 + %final damage source 1) * (1 + %final damage source 2) * (1 + %final damage source 3) * (1 + %final damage source 4) * ...
Final damage multiplies directly into your displayed numbers. There are no diminishing returns to final damage, and final damage multiplies every stat, including other final damage sources. Damage% (aka Total damage) multiplies every other stat except other total damage AND boss damage sources; they are additive in this case. This means the more boss% or damage% you have, the less benefit you gain from additional damage% or boss damage%. At lower levels of funding the difference is negligible, but at higher levels of funding the difference in scaling is huge (20% final damage beats a bonus of 40% total damage by a good amount).

If I have 40% total damage and 80% boss damage, it becomes 120% overall damage (a 2.2x multiplier to other sources of damage). If I have 40% final damage and 80% final damage, I will have 152% overall final damage (a 2.52x multiplier) instead of 120% (2.2x), since final damage is multiplicative with all sources, including other final damage, and is not additive like damage%. It not only has a higher multiplier (2.52x > 2.2x), but because it multiplies with everything, it keeps a constant value, whereas total damage scales off the more damage% or boss% you acquire, resulting in a bigger decrease the higher up the funding tiers you go.
TLDR: Final Damage is better than damage% in every possible way. Whereas you can have too much total damage due to diminishing returns, you can never have too much final damage% because it is multiplicative with everything.
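The arithmetic in the post above (2.2x vs 2.52x) can be checked directly with a quick illustrative script:

```python
# 40% total damage + 80% boss damage stack additively
total_damage, boss_damage = 0.40, 0.80
additive_multiplier = 1 + total_damage + boss_damage

# 40% and 80% coming from two separate final damage sources multiply
multiplicative_multiplier = 1.0
for fd in (0.40, 0.80):
    multiplicative_multiplier *= 1 + fd

print(round(additive_multiplier, 2))        # 2.2
print(round(multiplicative_multiplier, 2))  # 2.52
```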
@kaxi: Also, only some classes have buffs or passives that give final damage, like mages, dual blade, phantom, DrK, etc. Cross Surge's description would be something like "improves final damage by 80%". The skill descriptions state whether it's actually final damage or just regular ol' damage%. Final damage is the best damage modifier but also the rarest; aside from cores there is no way to attain final damage anywhere else. Cores, I believe, only give your attack skills final damage bonuses, not final damage buffs. Either your class has final damage passives or buffs, or it doesn't. Not everyone gets those broken multipliers, and for a good reason (since those classes are pretty weak DPS-wise without those final damage modifiers).
|
OPCFW_CODE
|
10-04-2011 01:26 PM
i'm running domino 8.5.2, BE 2010 R2, Version 13.0 Rev 4164 (64-bit) and the Domino agent for BE
Now that I have many databases 'daosified', my backup performance for my NSF files has dropped from 1200MB/min to 800MB/min. Our backups were running at 23+ hours and jumped to 32+hrs. Now that we've saved over 600GB of disk space, courtesy of DAOS, we are back under 24 hrs. We anticipated being under/at 17 hours for a backup after saving 600 GB.
Has anyone seen this type of performance loss? Is there any way we can get back some/most/all of our lost performance?
It seems more than a little foolish to have saved 600GB in disk space (which was our constraint to adding more users to the server) to now be constrained by the backup (and still unable to add more users to the server).
10-04-2011 02:40 PM
Speed primarily has to do with the source speed and how fast it can "read," data, and the target speed, how fast you can write the data. Of course there are other factors too like CPU, bus speeds, bandwidth, etc...
With DAOS, you can compress the Db files, and even enable single instance storage.
Does the backup cause Lotus to decompress the Db? Perhaps this is the cause?
Does the separation of files from the Db affect performance? e.g. having to back up thousands of small files now and not just a large Db?
I guess we need a better understanding of what was enabled for DAOS, but also of how Backup Exec calls the API to back up the data. (We'll need Symantec support to chime in here.)
10-04-2011 03:09 PM
Don't have a clue, but I have set the support flag to draw the attention of the Symantec folks
10-24-2011 02:57 PM
the short answer that I got from Symantec is that, unless you disable all of the DAOS features (through the registry) during the backup, this is the performance loss you'll get once you have DAOS installed and you've 'daosified' your databases. Between the overhead of cross-referencing every ticket with the appropriate NLO file, AND the hundreds of thousands of small NLO files, you get lousy backup performance.
AND, just to add icing to the cake, when i asked about disabling the Lotus Domino Server agent within BE and just backing up my Domino server while it is offline (with the right hardware, that would take about 2 hours), I was told that BE won't let me do a volume backup on a Domino server without renaming a DLL to completely disable the BE Domino Agent. Essentially, I may need to look for another backup solution if I want to do a Volume restore...
10-24-2011 05:50 PM
If you just want to back up the entire volume where the Domino files are, you don't have to disable the Domino agent. You just de-select everything under the Domino Databases section and select everything from the volume, e.g. C:\. However, you have to disable AFE (see the document below).
Otherwise, the .nsf files would automatically be excluded.
Remember to stop your Domino service first before you backup your entire volume, or else, when you restore the .nsf files, they might be unusable.
Note that this method of backing up Domino is not recommended. The recommended method is still to backup Domino using the agent.
You might want to look at Symantec System Recovery (SSR) which does image backups and restores.
|
OPCFW_CODE
|
How to understand Coxeter groups geometrically
I keep reading in the literature "Let $X$ be a Coxeter Group" but I can't think of any examples. I know they arise as Weyl groups, and there are affine Weyl group ones as well. This list is not exhaustive.
What about examples from Kac Moody algebras. How do I understand those geometrically?
I'm not sure I understand how to reconcile "I can't think of any examples" with "Weyl groups."
don't laugh ...
For an alternative geometric point of view, I think it's worth mentioning the Davis--Moussong complex. See Mike Davis' book The Geometry and Topology of Coxeter Groups for details.
See Humphreys, starting from II.5.3. Given a Coxeter system $(W, S)$ let $V$ be the free vector space on symbols $\alpha_s, s \in S$. We define a bilinear form on $V$ by extending
$$B(\alpha_s, \alpha_t) = - \cos \frac{\pi}{m(s, t)}$$
linearly, and we define a canonical linear representation $\sigma : W \to \text{GL}(V)$ respecting $B$ by extending
$$\sigma_s \lambda = \lambda - 2 B(\alpha_s, \lambda) \alpha_s$$
multiplicatively. This representation is faithful (II.5.4), so we can study $W$ using it. The most familiar case is when $B$ is positive-definite, and this occurs if and only if $W$ is finite (II.6.4). Cases where $B$ is positive-semidefinite include the affine Weyl group case, and in general what kind of geometry we get depends on the signature of $B$ (see for example II.6.8).
Alternately, the geometric pictures are perhaps clearest for Coxeter systems of rank $3$, where the corresponding groups are essentially the triangle groups. These can be understood as symmetries of tilings of the sphere, the Euclidean plane, or the hyperbolic plane by triangles depending on the signature of $B$. This picture should be compatible with the one above (I think one just needs to consider the action of $W$ on the "unit ball" in $V$) but I haven't checked the details.
Your statement about the unit ball is correct modulo the fact that in the Euclidean and hyperbolic cases, the unit ball has two connected components, and you should only look at one of them (you can see that all the reflections preserve the components, since they fix the hyperplane where $B(\alpha_s)$ vanishes).
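The dependence of the geometry on the signature of $B$ can be checked numerically for the rank-$3$ triangle groups mentioned above. Here is a small sketch (the helper names are mine): since the leading $1 \times 1$ and $2 \times 2$ minors of $B$ are automatically positive for any triangle group, the sign of $\det B$ alone decides between the spherical, Euclidean, and hyperbolic cases.

```python
import math

def gram(p, q, r):
    """Gram matrix B for the (p, q, r) triangle group: B_ij = -cos(pi / m_ij), m_ii = 1."""
    m = [[1, p, q],
         [p, 1, r],
         [q, r, 1]]
    return [[-math.cos(math.pi / m[i][j]) for j in range(3)] for i in range(3)]

def det3(B):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = B
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

for triple in [(2, 3, 5), (2, 3, 6), (2, 3, 7)]:
    d = det3(gram(*triple))
    kind = "spherical" if d > 1e-9 else ("Euclidean" if abs(d) <= 1e-9 else "hyperbolic")
    print(triple, kind)
# (2, 3, 5) spherical, (2, 3, 6) Euclidean, (2, 3, 7) hyperbolic
```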
You will find pretty much everything you need in Ken Brown's Buildings: Theory and Applications (with coauthor Peter Abramenko). It is all about Coxeter groups/complexes and their use in the geometric theory of buildings. For instance, chapter 2 (entitled 'Coxeter Groups') covers the canonical linear representation (given in the other answer on this thread), chapter 10 covers Euclidean and hyperbolic Coxeter groups, and chapter 8 contains a section on groups of Kac-Moody type. The book itself seems to be the bible of buildings, but I may be biased since Ken himself introduced me to it.
|
STACK_EXCHANGE
|
How do you implement a ManagedServiceFactory in OSGi?
I'm currently trying to set up my own implementation of a ManagedServiceFactory. Here is what I'm trying to do: I need multiple instances of some service on a per-configuration basis. With DS the components worked perfectly, but now I found out that these services should handle their own lifecycle (i.e. (de)registration at the service registry) depending on the availability of some external resource, which is impossible with DS.
Thus my idea was to create a ManagedServiceFactory, which would receive configs from the ConfigurationAdmin and create instances of my class. These in turn would try to connect to the resource in a separate thread and register themselves as a service when they're ready to operate.
Since I had no luck implementing this yet, I tried to break everything down to the most basic parts, not even dealing with the dynamic (de)registration, just trying to get the ManagedServiceFactory to work:
package my.project.factory;
import java.util.Dictionary;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.Constants;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedServiceFactory;
public class Factory implements BundleActivator, ManagedServiceFactory {
private ServiceRegistration myReg;
private BundleContext ctx;
private Map<String, ServiceRegistration> services;
@Override
public void start(BundleContext context) throws Exception {
System.out.println("starting factory...");
this.ctx = context;
java.util.Dictionary properties = new Hashtable<String, Object>();
properties.put(Constants.SERVICE_PID, "my.project.servicefactory");
myReg = context.registerService(ManagedServiceFactory.class, this,
properties);
System.out.println("registered as ManagedServiceFactory");
services = new HashMap<String, ServiceRegistration>();
}
@Override
public void stop(BundleContext context) throws Exception {
for(ServiceRegistration reg : services.values()) {
System.out.println("deregister " + reg);
reg.unregister();
}
if(myReg != null) {
myReg.unregister();
} else {
System.out.println("my service registration as already null " +
"(although it shouldn't)!");
}
}
@Override
public String getName() {
System.out.println("returning factory name");
return "ServiceFactory";
}
@Override
public void updated(String pid, Dictionary properties)
throws ConfigurationException {
System.out.println("retrieved update for pid " + pid);
ServiceRegistration reg = services.get(pid);
if (reg == null) {
services.put(pid, ctx.registerService(ServiceInterface.class,
new Service(), properties));
} else {
// i should do some update here
}
}
@Override
public void deleted(String pid) {
ServiceRegistration reg = services.get(pid);
if (reg != null) {
reg.unregister();
}
}
}
Now, it should receive configurations from the ConfigurationAdmin for PID my.project.servicefactory, shouldn't it?
But it does not receive any configurations from the ConfigurationAdmin. The bundle is started, the service is registered and in the web console, I can see the config admin holds a reference to my ManagedServiceFactory. Is there a certain property which should be set? The interface specification does not suggest that. Actually my implementation is more or less the same as the example there. I've no idea what I'm doing wrong here, any pointers to the solutions are very welcome.
Also, I originally thought to implement the ManagedServiceFactory itself as a DS component, which should also be possible, but I failed at the same point: no configurations are handed over by the ConfigAdmin.
update
To clarify the question: I think this is mainly a configuration problem. As I see it, I should be able to specify two PIDs for the factory: one identifying a configuration for the factory itself (if any), and one which would produce services through this factory, which I thought should be the factory PID. But the framework constants do not hold anything like this.
update 2
After searching a bit in the Felix FileInstall source code, I found out that it treats configuration files differently depending on whether there is a - in the filename. With the configuration file named my.project.servicefactory.cfg it did not work, but the configs named my.project.servicefactory-foo.cfg and my.project.servicefactory-bar.cfg were properly handed over to my ManagedServiceFactory as expected, and multiple services with ServiceInterface were registered. Hurray!
update 3
As proposed by Neil, I put the declarative service part in a new question to bound the scope of this one.
Though not answering your question directly, the last time I wrestled with ManagedServiceFactory I took Neil Bartlett's advice and used DS components instead, as mentioned here: http://stackoverflow.com/a/4129464/31818
As I mentioned above, DS does not work for components (services) that need to manage their own lifecycle. DS gives you no means to control the service (un)registration process but instead manages the lifecycle for you (which is, in most cases, a positive thing!)
Ah, I had missed that requirement when reading your question earlier. It sounds, then, like the presence of your services is contingent on two things: a configuration being assigned, and some external resource being available. I will trust that you know your requirements better than any of the rest of us.
Two questions: (1) Does Config Admin actually have a configuration record with a PID that matches your MSF? (2) What is ServiceConstants.PID, is this a constant you have defined yourself? Since you have not listed it explicitly I cannot verify that it is correct... why not use Constants.SERVICE_PID from the org.osgi.framework package?
Yes, the CA has a config for pid my.project.servicefactory. 2) ServiceConstants was something from Pax, that was wrong, I updated it above to Constants.SERVICE_PID, but without any effect
I read your second update. I can help you but this thread is getting really confusing. PLEASE post the question about DS separately.
I think that the problem is you have a singleton configuration record rather than a factory record. You need to call Config Admin with the createFactoryConfiguration method using my.project.servicefactory as the factoryPid.
If you are using Apache Felix FileInstall (which is a nice easy way to create config records without writing code) then you need to create a file called my.project.servicefactory-1.cfg in the load directory. You can create further configurations with the same factoryPID by calling them my.project.servicefactory-2.cfg etc.
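To make the FileInstall naming concrete, each factory instance gets its own file in the load directory whose name is the factory PID, a dash, and an arbitrary suffix (the property below is purely illustrative):

```
# load/my.project.servicefactory-1.cfg
# The "-1" suffix tells FileInstall this is a factory configuration for
# factory PID my.project.servicefactory; "-2", "-foo", etc. create further instances.
exampleKey = exampleValue
```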
Exactly, this was the problem. I just updated my own post while you were answering, because I'd rather like to solve this with DS. Thanks!
I think you should open a new question on StackOverflow and try to keep the details of what you're trying to do with DS separate from the MSF stuff. Otherwise this thread will just get really confusing.
|
STACK_EXCHANGE
|
Inforce 6501 Micro SOM Beats Five Other Vendor Products in a Bake-off
Figure-1: NASA's Astrobee free-flying robot headed for the ISS in 2017.

Designing a next-generation free-flying autonomous robot [see Figure 1], scheduled to be deployed on the International Space Station (ISS) in 2017, is no trivial task. The NASA Astrobee will serve as a robotic assistant to offload routine, repetitive, but long-duration and CPU-intensive tasks, replacing a legacy free-flying robot.

Challenges in building the Astrobee avionics

Figure-2: The Astrobee's avionics block diagram showing the three compute modules.

The Astrobee has subsystems for structure, propulsion, power, guidance, navigation and control (GN&C), command and data handling (C&DH), thermal control, communications, docking mechanism, and a perching arm. As seen in the block diagram in Figure-2, the avionics provides computation and communication resources for the Astrobee. The three compute platforms are the low- [LLP], mid- [MLP], and high-level processor [HLP], which are configured to perform specific functions.

The bake-off to pick the right compute platform

Choosing an appropriate compute platform is the most critical part of designing the avionics. Particularly since the new hardware will replace an ageing workhorse on the ISS, it must be upgradeable and serve reliably for several years ahead. So the engineers at NASA Ames ran a thorough analysis that compared commercial compute platforms (SOMs and SBCs) from six different vendors. The following were their requirements:
- Compute power: The MLP runs vision-based mapping and navigation algorithms, which requires high-end processing capability.
- Software development cost: The MLP runs the Robot Operating System (ROS) package on a Linux OS and hence compute modules that support this requirement along with appropriate device drivers was critical.
- Hardware development cost: Off-the-shelf compute platforms that offer plug-and-play capabilities get better scores.
- Modularity: Crews on the ISS should be able to swap these modules effortlessly should a failure occur or an upgrade is required. SOMs with edge connectors got a higher score since no wires and nuts/bolts needed to be operated.
- Connectivity: Ethernet, I2C, SPI, USB 2.0, and WiFi are basic requirements for the MLP to communicate with the LLP, HLP, and external devices.
- Power consumption: Depending on use-cases, the Astrobee must be able to run on batteries for upwards of 9 hours. The lower the power consumption, the better the score.
The winning Inforce 6501 Micro SOM offered:
- Qualcomm Snapdragon 805 processor (APQ8084) with quad-core 2.7GHz CPUs and heterogeneous compute that includes Hexagon DSP, Adreno GPU and dual ISPs, enabling streaming of 1080p HD and H.264 encoded video to the ground
- Support for Linux (Ubuntu) and ROS--this is the first known port of ROS on the Qualcomm Snapdragon 805 processor (APQ8084) platform.
- Readily available camera (4K HD capable ACC-1H30) and display (4” ACC-1B10) accessories and associated device drivers to run on an Android OS (for the HLP).
- Tiny form-factor (28mm x 50mm) and cross-compatibility with a common carrier board design makes it easily swappable and upgradeable to newer Snapdragon modules in the future.
|
OPCFW_CODE
|
Problem when running python main_test.py
When I run python main_test.py --model_type 3class, the following problem appears:
/mnt/f/code/PointNetGPD/PointNetGPD
/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/torch/serialization.py:868: SourceChangeWarning: source code of class 'torch.nn.parallel.data_parallel.DataParallel' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/torch/serialization.py:868: SourceChangeWarning: source code of class 'model.pointnet.PointNetCls' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/torch/serialization.py:868: SourceChangeWarning: source code of class 'model.pointnet.PointNetfeat' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/torch/serialization.py:868: SourceChangeWarning: source code of class 'model.pointnet.STN3d' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/torch/serialization.py:868: SourceChangeWarning: source code of class 'torch.nn.modules.conv.Conv1d' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/torch/serialization.py:868: SourceChangeWarning: source code of class 'torch.nn.modules.pooling.MaxPool1d' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/torch/serialization.py:868: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/torch/serialization.py:868: SourceChangeWarning: source code of class 'torch.nn.modules.activation.ReLU' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/torch/serialization.py:868: SourceChangeWarning: source code of class 'torch.nn.modules.batchnorm.BatchNorm1d' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
load model ../data/pointnetgpd_3class.model
voting: [tensor([0]), tensor([0]), tensor([0]), tensor([0]), tensor([0]), tensor([0]), tensor([0]), tensor([0]), tensor([0]), tensor([0])]
/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/scipy/stats/stats.py:249: FutureWarning: The input object of type 'Tensor' is an array-like implementing one of the corresponding protocols (__array__, __array_interface__ or __array_struct__); but not a sequence (or 0-D). In the future, this object will be coerced as if it was first converted using np.array(obj). To retain the old behaviour, you have to either modify the type 'Tensor', or assign to an empty array created with np.empty(correct_shape, dtype=object).
a = np.asarray(a)
/home/xiaofeisong/anaconda3/envs/pointnetgpd/lib/python3.7/site-packages/scipy/stats/stats.py:249: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
a = np.asarray(a)
Test result: tensor([0])
I modified some of the code. Does anyone know what is causing this, and how it should be solved?
The causes of this kind of error tend to vary widely; I suggest starting by carefully going through the code you modified and deepening your understanding of the original author's code.
|
GITHUB_ARCHIVE
|
import os
import re

# myDirectory - where you put those output files
myDirectory = "./"
fileNum = 0
delay = 0.0
percentage = 0.0
for file in os.listdir(myDirectory):
    # replace .txt with the file extension of your files
    if file.endswith(".txt"):
        fileNum += 1
        print(str(fileNum) + ":" + str(file))
        with open(os.path.join(myDirectory, file)) as f:
            fileContent = f.read()
        if len(fileContent) == 0:
            print("Empty file!!!\n")
            continue
        # each matching line holds a delay in ms and a loss percentage in parentheses
        pattern = re.compile(r's/sec(.*?)ms.*\((.*?)\)')
        for row in fileContent.split('\n'):
            results = pattern.findall(row)
            if len(results) > 0:
                firstMS = results[0][0].strip(' ')
                percentage += float(results[0][1].strip(' ').strip('%'))
                print(firstMS + ' : ' + str(percentage))
                delay += float(firstMS)
print('number of file:' + str(fileNum))
delay = delay / fileNum
#percentage = (percentage - 4*26)/4.0
percentage = (percentage - 26 * fileNum) / fileNum
print('Average Delay is :' + str(delay))
print('Average Packet Loss is: ' + str(percentage))
|
STACK_EDU
|
What I need is: when I enter some values (varchar) in txt1 and txt2,
they should be added to the database and shown in GridView1 after I press butt1.
Does anybody know the full code for this?
View Complete Post
I want to use a GridView, bind the data, and afterwards insert the data into the database using jQuery.
Below is the link I found for delete,
but I want a sample example for inserting a row from the GridView and adding it to the database.
hi i am having a gridview on each row i am having a update and add button.
on click of either of the button i am opening a popup where the end user will enter data and on clck of ok button in the child page
based on the click ie
if i clicked update i need to update that particular row with the values from child page.
if i clicked on add i need to add a row below the row where i clicked add button .
can anyone help me with this?
I have three columns that come from a database table (Table1), and one column is for the user to enter data. Let us consider that it looks like this:
ItemNo ItemName Qty uservalue
001 A 50 20
002 A 20 5
003 B 50
004 C 60 10
005 D 40
006 E 90 15
If its 300 row from database to gridview it will show 50 record with page navigation..
What i need is i need to insert data in to database table which gridview row has user value..
The output of the another table(Table2) will look like this after insert
ItemNo ItemName uservalue
001 A 20
002 A 5
Hi, and thanks in advance. ASP.NET using VB, with a code-behind page.
I have a relatively simple GridView bound to a product table that displays great. The problem comes when I choose to edit a row. When I do so, the GridView's RowDataBound event fires and the values of the Text properties of the table cells in the row evaluate to "" (empty string).
I do notice that if I break within the GridView1_RowDataBound event during a refresh of the page, the values are there. But again, when I hit the "Edit" button in the first column (the CommandField column), the Text properties of the cells resolve to "".
Here is the code for the event, followed by the gridview in the form (inside an updatepanel)
Protected Sub GridView1_RowDataBound(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles GridView1.RowDataBound If e.Row.RowType = DataControlRowType.DataRow Then ' not a header, footer row.. etc. Dim cellProductNumber As TableCell = e.Row.Cells(3) ' ProductNumber column
'>>>>> after clicking "update", even though a value displays in the gridview, ' the cellProductNumber.Te
I have a problem that can't find a way to do it, I have a gridview with this columns:
ID | AreaName | CostCenter
with the "Enable Editing" option check and what I want to do is to get the values of areaname and costcenter and assign them to a label control so I can use the values for something else, for example:
ID AreaName CostCenter
1 Multiconector 1025
2 One Conector 2065
3 Plug & Play 1445
|
OPCFW_CODE
|
/*
* =====================================================================================
*
* Filename: segment.cc
*
* Description: some segment algorithms
*
* Version: 1.0
* Created: 01/22/2015 09:43:30 AM
* Revision: none
* Compiler: gcc
*
* Author: (Qi Liu), liuqi.edward@gmail.com
* Organization: antq.com
*
* =====================================================================================
*/
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>
#include <float.h>
#include <cmath>
const int32_t CLOCKWISE_SIGNAL = 1;
const int32_t COUNTER_CLOCKWISE_SIGNAL = -1;
const int32_t COLLINEAR = 0;
#define SAME_SIGN(x, y) (((x) > 0) ^ ((y) < 0))
typedef struct _Point{
double x;
double y;
}Point_t;
typedef struct _Line{
Point_t start;
Point_t end;
}Line_t;
typedef struct _CrossPoint{
Point_t point;
double cross_mult;
}CrossPoint_t;
typedef struct _PointArray{
Point_t* points;
int32_t size;
}PointArray_t;
double Distance(Point_t p1, Point_t p2)
{
return sqrt(pow(p1.x-p2.x, 2)+pow(p1.y-p2.y, 2));
}
double CrossMultiply(Point_t p1, Point_t p2, Point_t p0)
{
return (p1.x-p0.x)*(p2.y-p0.y) - (p2.x-p0.x)*(p1.y-p0.y);
}
int ComparePolarAngel(const void* first,const void* second)
{
int ret = 0;
CrossPoint_t* cp1 = (CrossPoint_t*)first;
CrossPoint_t* cp2 = (CrossPoint_t*)second;
if(cp1->cross_mult == cp2->cross_mult)
ret = 0;
else if(cp1->cross_mult == 0)
return -1;
else if(cp2->cross_mult == 0)
return 1;
else if((cp1->cross_mult > 0 && cp2->cross_mult > 0) ||
(cp1->cross_mult < 0 && cp2->cross_mult < 0))
{
if(cp1->cross_mult < cp2->cross_mult)
ret = -1;
else
ret = 1;
}
else if(cp1->cross_mult < 0)
ret = 1;
else
ret = -1;
return ret;
}
/*
 * Sort points by polar angle relative to points[0].
 * If more than one point has the same polar angle,
 * remove all but the one that is farthest from points[0].
 */
PointArray_t SortPointsByPolarAngel(Point_t* points, int32_t size)
{
int32_t i, j;
double x;
CrossPoint_t* cpoints = (CrossPoint_t*)malloc(size*sizeof(CrossPoint_t));
PointArray_t ps;
ps.points = (Point_t*)malloc(size*sizeof(Point_t));
for(i = 1; i < size; ++i)
{
x = points[i].x - points[0].x;
cpoints[i].point = points[i];
if(x == 0)
cpoints[i].cross_mult = DBL_MAX;
else
cpoints[i].cross_mult = (points[i].y - points[0].y)/x;
}
qsort(cpoints+1, size-1, sizeof(CrossPoint_t), ComparePolarAngel);
ps.points[0] = points[0];
ps.points[1] = cpoints[1].point;
for(i = 2, j = 1; i < size; ++i)
{
if(cpoints[i].cross_mult == cpoints[j].cross_mult)
{
if(Distance(points[0], ps.points[j]) < Distance(points[0],
cpoints[i].point))
ps.points[j] = cpoints[i].point;
}
else
ps.points[++j] = cpoints[i].point;
}
ps.size = j+1;
free(cpoints);
return ps;
}
/*
* Find the left-bottom point of the points array
*/
void FindLeftBottomPoint(Point_t* points, int32_t size)
{
int i;
Point_t p = points[0];
for(i = 1; i < size; ++i)
{
if((points[i].y < p.y) || (points[i].y == p.y && points[i].x < p.x))
{
p = points[i];
points[i] = points[0];
points[0] = p;
}
}
}
/*
* Judge the direction of second line related to first line
* @RET
* 1 for clockwise
* -1 for counter clockwise
* 0 for collinear
*/
int32_t JudgeDirection(const Point_t origin, const Point_t first, const Point_t second)
{
double vx = (first.x - origin.x) * (second.y - origin.y);
double vy = (first.y - origin.y) * (second.x - origin.x);
vx -= vy;
if(vx < 0)
return CLOCKWISE_SIGNAL;
else if(vx > 0)
return COUNTER_CLOCKWISE_SIGNAL;
else
return COLLINEAR;
}
/*
* Graham scan algorithm for finding the convex hull
*/
void GrahamScan(Point_t* points, int32_t size, PointArray_t* stack)
{
if(points == NULL || size <=0 || stack == NULL)
return;
int i, top, new_size;
PointArray_t points_array;
Point_t* rpoints;
FindLeftBottomPoint(points, size);
points_array = SortPointsByPolarAngel(points, size);
rpoints = points_array.points;
new_size = points_array.size;
if(new_size < 3)
{
free(rpoints);
return;
}
top = 0;
stack->points[top++] = rpoints[0];
stack->points[top++] = rpoints[1];
stack->points[top++] = rpoints[2];
    for(i = 3; i < new_size; ++i)
    {
        /* Pop until the turn towards rpoints[i] is strictly counter-clockwise */
        while(top > 2 && JudgeDirection(stack->points[top-2], stack->points[top-1],
                rpoints[i]) != COUNTER_CLOCKWISE_SIGNAL)
        {
            top--;
        }
        stack->points[top++] = rpoints[i];
    }
stack->size = top;
free(rpoints);
}
bool OnSegment(const Point_t p1, const Point_t p2, const Point_t p0)
{
if(((p0.x > p1.x) && (p0.x < p2.x)) || ((p0.x < p1.x) && (p0.x > p2.x)))
return true;
else if(((p0.y > p1.y) && (p0.y < p2.y)) || ((p0.y < p1.y) && (p0.y > p2.y)))
return true;
else
return false;
}
/*
 * Judge whether two lines intersect
*/
bool JudgeIntersection(const Line_t first, const Line_t second)
{
bool ret = false;
int32_t start2first = JudgeDirection(first.start, first.end, second.start);
int32_t end2first = JudgeDirection(first.start, first.end, second.end);
int32_t start2second = JudgeDirection(second.start, second.end, first.start);
int32_t end2second = JudgeDirection(second.start, second.end, first.end);
if(((start2first > 0 && end2first < 0) || (start2first < 0 && end2first > 0)) &&
((start2second > 0 && end2second < 0) || (start2second < 0 && end2second > 0)))
ret = true;
else if((start2first == 0) && OnSegment(first.start, first.end, second.start))
ret = true;
else if((end2first == 0) && OnSegment(first.start, first.end, second.end))
ret = true;
else if((start2second == 0) && OnSegment(second.start, second.end, first.start))
ret = true;
else if((end2second == 0) && OnSegment(second.start, second.end, first.end))
ret = true;
return ret;
}
Point_t* ReadPointsData(const char* file_name, int32_t* size)
{
int i, count;
Point_t* points;
freopen(file_name, "r", stdin);
scanf("%d", size);
count = *size;
points = (Point_t*)malloc(count * sizeof(Point_t));
for(i = 0; i < count; ++i)
{
scanf("%lf %lf", &(points[i].x), &(points[i].y));
}
return points;
}
int main()
{
    clock_t start;
Point_t first, second;
Line_t line1, line2;
first.x = 1;
first.y = 1;
second.x = 3;
second.y = 3;
line1.start = first;
line1.end = second;
first.x = 1;
first.y = 0;
second.x = 1;
second.y = 6;
line2.start = first;
line2.end = second;
fprintf(stdout, "Line Ans: %d\n", JudgeIntersection(line1, line2));
//Test Graham scan algorithm
int size;
Point_t* points;
PointArray_t stack;
const char* file_name = "points.data";
points = ReadPointsData(file_name, &size);
stack.size = 0;
stack.points = (Point_t*)malloc(size*sizeof(Point_t));
start = clock();
GrahamScan(points, size, &stack);
fprintf(stdout, "Time : %.3lf\n", double(clock()-start)/CLOCKS_PER_SEC);
fprintf(stdout, "Convex Hull Size: %d \n", stack.size);
free(points);
free(stack.points);
return 0;
}
|
STACK_EDU
|
One more thought on recording economic experiments using blockchain: privacy & Zero Knowledge Proofs
In a previous post, I discussed the advantages that blockchain can offer to the conduct of economic experiments. In particular, recording experimental data on an immutable ledger can address the concern that researchers will try to fake data or omit results from selected sessions.
In my review there of why a blockchain-based registry for economic experiments has yet to emerge, I neglected one aspect that is more general and not unique to economic experiments: privacy concerns.
At the risk of sounding too pessimistic, I think it is safe to argue that researchers are generally reluctant to voluntarily share their data publicly, on an immutable ledger, in real time. This has several reasons. First, the implicit competition in the profession gives an incentive not to disclose your data until you are done utilizing it, based on the fear that others will publish something using the same data before you. Second, there may be rent-seeking going on (e.g. researchers who exploit their "market power" as the sole owners of the data to gain credit in other projects by demanding to be named as co-authors in exchange for sharing their data). Third, there may be reputation concerns (e.g. one does not want to reveal potential mistakes that might have occurred during data collection). Fourth, data protection rules (e.g. the GDPR in Europe) may prohibit the recording of personal data, so that researchers may be legally barred from recording the data publicly. Fifth, subjects may refuse to participate in experiments if the data is recorded publicly.
This raises an obvious question: can one find a solution that will allow the data to be recorded on-chain without revealing the content of the data? A potential remedy can be found in the concept of “zero knowledge proofs” (ZKP): an interesting cryptographic concept that allows parties to prove to each other that they possess a piece of knowledge without revealing what that knowledge is. Getting a good grasp of ZKP can be quite challenging, as it is usually explained using (somewhat over-simplified) examples that may or may not include a magical cave.
ZKP have already been used in conjunction with blockchain projects (e.g. in the cryptocurrency Zcash) and seem to be a promising avenue for tackling problems such as the aforementioned one.
ZKP work roughly as follows:
· One entity – a “prover” – wants to prove something to another entity – a “verifier” – but without revealing the content of that something.
· To do this, the verifier poses a series of questions that can be easily solved using an answer sheet.
· A prover who has access to the answer sheet will always answer correctly, but how can the verifier be sure that the prover didn’t just guess? If there are enough questions, a person who falsely claims to hold the answer sheet will eventually guess wrong. Thus, enough questions should minimize the risk of lucky guesses to a negligible amount. ZKP thus rely on statistical certainty.
· Note that the prover never actually shows the verifier his answer sheet – and the verifier does not learn what is in that sheet. The verifier only concludes that the prover does have the answer sheet.
· Thus, the prover can prove he holds an answer sheet while providing the verifier with “zero knowledge” about the answer sheet itself.
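The "statistical certainty" in the steps above can be made concrete with a quick calculation. Assuming, purely for illustration, that each challenge gives a guessing impostor a 50% chance of answering correctly, the chance of passing every round shrinks exponentially:

```python
# Chance that a prover WITHOUT the answer sheet passes every round by
# guessing, assuming (illustratively) a 50% success chance per challenge.
def cheat_probability(rounds: int, guess_chance: float = 0.5) -> float:
    return guess_chance ** rounds

for k in (1, 10, 20, 40):
    print(f"{k:>2} rounds: {cheat_probability(k):.2e}")
```

After 40 such rounds the probability of a lucky streak is below one in a trillion, which is why the verifier can treat a long run of correct answers as proof.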
Applying this to the context of recording economic experiments might look something like this:
- Researchers who submit a paper for publication will be asked to prove that the data that is analyzed in the paper is identical to the data in the original file created in the experiment itself.
- To achieve this, the data file in each session is “hashed” – it is transformed into a unique identifier that cannot be reversed-engineered. This hash will be recorded on-chain – not the data itself.
- Journal submission systems will be designed such that the researcher needs to prove that the file he is holding is the one whose hash is identical to the one registered on the chain.
Let me illustrate this process with an example. Suppose that in the (not so far) future, a hypothetical researcher, “Dr. Hasher”, will want to run an experiment about the effect of religion on risk aversion. Hasher is afraid that subjects will refuse to participate, because the data will include information on religious beliefs. However, in this example, journals will not publish papers unless they receive proof that the data wasn’t manipulated.
Hasher proposes as follows: at the end of each experimental session, the Excel data files will be hashed, so that "session1.xls" is recorded as a long string of numbers and letters (e.g. "JG23r0!kGsgdsjgj"), and "session2.xls" will be recorded as another string. On the immutable ledger, one could only see the hash and the time stamp, never the data file. Upon submission, the researcher will submit the file to a secure system that hashes the file and transmits the hash to the journal (i.e. the journal still does not have access to the file itself). The journal verifies that the hash is the same as the one on-chain, and can be confident that the data is original.
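The hash-and-verify workflow Hasher proposes can be sketched with an ordinary SHA-256 digest (a minimal sketch using Python's hashlib; the file contents and the "on-chain" variable are stand-ins for the real session files and ledger):

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Return the SHA-256 hex digest of a session file's contents."""
    return hashlib.sha256(data).hexdigest()

# End of the session: only the hash is recorded on the ledger.
session_data = b"subject,belief,choice\n1,religious,safe\n"
on_chain = file_hash(session_data)

# Submission time: re-hash the submitted file and compare digests.
assert file_hash(session_data) == on_chain       # original data verifies
tampered = b"subject,belief,choice\n1,religious,risky\n"
assert file_hash(tampered) != on_chain           # any edit changes the hash
```

Because the digest cannot be reverse-engineered, the ledger never reveals the data itself, only whether a later file matches it.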
For the sake of brevity, I do not wish to get too deeply into the disadvantages of this solution (one problem might be that if the researcher forgets his login details, the data will be lost). What seems important here is to raise the question: can we improve on current practices using blockchain? The answer to this is, at the very least, maybe.
|
OPCFW_CODE
|
Maximum rate of change filter
The Maximum rate of change option allows you to set a value which limits how much the variable can change over one second. Because no electronic equipment is perfect, there will occasionally be errors in the readings in the logged data. For instance, an RPM counter could spike while it is resting. To counteract this kind of event in the data, you can set a maximum rate of change. If you're slowly increasing the revs from 1000 to 5000 and you get a spike reading of 20000 for a split second, applying a maximum rate of change of 3000 RPM would cause the software to ignore this spike and process the rest of the data without it.
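The behaviour described above can be sketched in a few lines (a minimal sketch; the 3000 RPM threshold comes from the example, while the one-sample-per-second rate and function name are illustrative assumptions):

```python
def limit_rate_of_change(samples, max_delta_per_sec, sample_rate_hz=1.0):
    """Ignore samples that jump more than the allowed change per second
    (scaled to the sampling interval) away from the last accepted value."""
    max_step = max_delta_per_sec / sample_rate_hz
    accepted = []
    last = None
    for s in samples:
        if last is None or abs(s - last) <= max_step:
            accepted.append(s)
            last = s
        # else: momentary spike, drop it and keep the previous value
    return accepted

rpm = [1000, 1500, 2000, 20000, 2500, 3000]
print(limit_rate_of_change(rpm, max_delta_per_sec=3000))
# -> [1000, 1500, 2000, 2500, 3000]  (the 20000 spike is ignored)
```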
A low-pass filter is a filter that passes low frequency signals but attenuates (reduces the amplitude of) signals with frequencies higher than the cutoff frequency. The actual amount of attenuation for each frequency varies from filter to filter. It is sometimes called a high-cut filter, or treble cut filter when used in audio applications.
The concept of a low-pass filter exists in many different forms, including electronic circuits (like a hiss filter used in audio), digital algorithms for smoothing sets of data, acoustic barriers, blurring of images, and so on. Low-pass filters play the same role in signal processing that moving averages do in some other fields, such as finance; both tools provide a smoother form of a signal which removes the short-term oscillations, leaving only the long-term trend.
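As a digital smoothing algorithm, the simplest low-pass filter is a first-order exponential moving average; like the moving averages mentioned above, it follows the long-term trend while attenuating short-term oscillations (a minimal sketch; the smoothing constant alpha is an illustrative choice):

```python
def low_pass(samples, alpha=0.2):
    """First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
    Smaller alpha means a lower cutoff frequency (heavier smoothing)."""
    y = samples[0]
    out = []
    for x in samples:
        y = y + alpha * (x - y)
        out.append(y)
    return out

# A rapidly alternating input is heavily attenuated...
print(low_pass([0, 10, 0, 10, 0, 10]))
# ...while a constant (zero-frequency) input passes through unchanged.
print(low_pass([5, 5, 5, 5]))
```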
A high-pass filter is a filter that passes high frequencies well, but attenuates (or reduces) frequencies lower than the cutoff frequency. The actual amount of attenuation for each frequency varies from filter to filter. It is sometimes called a low-cut filter; the terms bass-cut filter or rumble filter are also used in audio applications. A high-pass filter is the opposite of a low-pass filter, and a bandpass filter is a combination of a high-pass and a low-pass.
It is useful as a filter to block any unwanted low frequency components of a complex signal while passing the higher frequencies. Of course, the meanings of 'low' and 'high' frequencies are relative to the cutoff frequency chosen by the filter designer.
The frequency response of the Butterworth filter is maximally flat (has no ripples) in the passband, and rolls off towards zero in the stopband. When viewed on a logarithmic Bode plot, the response slopes off linearly towards negative infinity. For a first-order filter, the response rolls off at -6 dB per octave (-20 dB per decade); all first-order filters, regardless of name, have the same normalized frequency response. For a second-order Butterworth filter, the response decreases at -12 dB per octave, a third-order at -18 dB, and so on. Butterworth filters have a monotonically changing magnitude function with ω. The Butterworth is the only filter that maintains this same shape for higher orders (but with a steeper decline in the stopband), whereas other varieties of filters (Bessel, Chebyshev, elliptic) have different shapes at higher orders.
Compared with a Chebyshev Type I/Type II filter or an elliptic filter, the Butterworth filter has a slower roll-off, and thus will require a higher order to implement a particular stopband specification. However, Butterworth filter will have a more linear phase response in the passband than the Chebyshev Type I/Type II and elliptic filters.
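The roll-off behaviour described above follows directly from the standard Butterworth magnitude formula |H(jω)| = 1 / sqrt(1 + (ω/ωc)^(2n)); a quick numeric check (a minimal sketch) shows the -3 dB point at the cutoff for every order, and a roll-off that steepens as the order grows:

```python
import math

def butterworth_magnitude(w, wc=1.0, order=1):
    """Magnitude response |H(jw)| of an ideal n-th order Butterworth low-pass."""
    return 1.0 / math.sqrt(1.0 + (w / wc) ** (2 * order))

# At the cutoff the response is 1/sqrt(2) (about -3 dB) regardless of order.
print(butterworth_magnitude(1.0, order=1), butterworth_magnitude(1.0, order=4))

# One decade above the cutoff: about -20 dB for order 1, -40 dB for order 2.
print(20 * math.log10(butterworth_magnitude(10.0, order=1)))
print(20 * math.log10(butterworth_magnitude(10.0, order=2)))
```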
Chebyshev filters are analog or digital filters having a steeper roll-off and more passband ripple than Butterworth filters. Chebyshev filters have the property that they minimize the error between the idealized filter characteristic and the actual over the range of the filter, but with ripples in the passband. This type of filter is named in honor of Pafnuty Chebyshev because their mathematical characteristics are derived from Chebyshev polynomials.
Because of the passband ripple inherent in Chebyshev filters, filters which have a smoother response in the passband but a more irregular response in the stopband are preferred for some applications.
|
OPCFW_CODE
|
Flea Shampoo For Cats Woolworths
Check out exelpet grooming flea control shampoo 250ml at woolworths.com.au.
Flea shampoo for cats woolworths. John paul oatmeal waterless foam shampoo. This is a helpful preventative for future infestations so products that do this are useful. Rufus coco flea flee dog treatment shampoo 200ml woolworths skout s honor probiotic honeysuckle pet shampoo conditioner 16 oz bottle chewy com.
Sergeant’s gold flea & tick cat shampoo; This ingredient acts as an insecticide, killing adult fleas and ticks on contact. Sentry purrscriptions flea and tick shampoo for cats, 12 oz
Kin+kind flea & tick dog & cat shampoo; Exelpet ezy dose heartwormer & intestinal allwormer for dogs 4 pack. Adams flea and tick shampoo make the first position in this top 7 list as it is the best flea shampoo for cats available at such a reasonable price.
Sentry purrscriptions flea and tick shampoo for cats kills fleas, ticks and lice. Exelpet fleaban contains 1.8 g/l. Best moisturizing dry shampoo for cats:
6 other really good flea shampoos for cats. Exelpet intestional allwormer treatment for cats 2 pack. Tropiclean natural flea and tick shampoo ensure the best flea bath for cats as its combination of essential oils helps to effectively tackle fleas, lice, eggs, and larvae.
John paul pet oatmeal shampoo; Shop online for woolworths great range of cat grooming, accessories & toys. Adams plus flea spray for cats & dogs.
|
OPCFW_CODE
|
If you know a little French, think of "nous" and "vous". (My favorite spelling mistake is to write "me" instead of "mi".)
ikr it's been like 5 times I got something wrong because I thought "ni" was "you"
In Mandarin, "你(ni)“ means "you (informal)" as well. I can't type actual pinyin diacritic thingies here, but entering "ni" on a pinyin keyboard will return "你"... along with "泥" which means "mud." I don't confuse "ni" with mud, but I do think "you..." Eh...
Sounds like a very silly sentence to me. I interpreted it as 'We are going to dance', but that was marked incorrect.
Ni iras - We go. Ni iras - We are going.
"We are going to dance" would be some kind of future tense.
"Go" and "dance" are two seperate actions here. Although this is not tht the correct translation, think of it and like this: Ni iras kaj dancas - We go and then we dance.
Hope that helps. :)
It is a "silly" sentence indeed, which makes as much/little sense in Esperanto as in English. "Ni iras danci" = We are going [somewhere] to dance, or "Ni dancos" = We will dance / We are going to dance, or "Ni marŝas kaj dancas" = We walk and dance would make more sense. But these words / forms have not been taught yet. :)
Forget about English that you know already and read the sentence again "we are going to dance". Did you get it? We are what? We are going. Huh? Going to where? Going to... dance? Huh? Now, English is silly.
Why not "we go to dance"? Is this a proper Esperantan hendiadys or is it just taken straight from English? Or does it mean that we go and we also, quite separately from going, dance (and not "we go to dance")?
It is always pronounced as t͡s https://en.wikipedia.org/wiki/Voiceless_alveolar_affricate#Voiceless_alveolar_sibilant_affricate
Do you know if the "ts" pronunciation is similar to the Japanese "tsu" without the "u" sound? That technical wikipedia entry doesn't help me that much XD
According to my beginner knowledge of Japanese, something like that. You can listen to some phrases and make your own opinion :) For me, as a russian native speaker, so the sound t͡s is quite natural ;)
I wouldn't be sure how to pronounce /t͡s/ as opposed to /ts/, but surely, if you just said it as /dantsas/ as opposed to /dant͡sas/ (for the time being) you'd be understood?
Is there any difference between "simple present tense" and "present continuous tense" in Esperanto?
There is an explicit present continuous tense (e.g. "mi estas dancanta"), but it is only used if really necessary (i.e., rarely). Wherever possible Esperanto uses the simple past, present, and future tenses (-is, -as, -os). So usually dancas is used for both "I dance" and "I am dancing".
I've been meaning to get around to this, but what is the equivalent of "y'all" or "vosotros" in Esperanto?
I've heard that there used to be "ci", though there's little documentation on its usage.
Google translate only gives me "Vi ĉiuj", which means "You all".
However, I feel that this is clumsy. As a Texan, I am used to having the contracted word "Y'all". I am also used to the proper Spanish word "Vosotros" which works better than Central/Southern American "Ustedes"
I think having a proper word for 2nd person plural is important. Some linguists say it's unnecessary because "You all" could be used. Though, couldn't the same be said for "Me all"?
(I'm assuming that there was a definite choice to avoid having both sing. and plur. forms of the 2nd person pronoun (you), so as to avoid the development that a number of European languages have, where the plural "you" gets used as a polite, formal "you" for singular, too.)
"Ci" is usually only used in translations where the original had words like "vosotros" in contrast with the singular. Vi is normally used otherwise. (as much as I got from that Wikipedia article... By the way, there is an Esperanto Wikipedia)
|
OPCFW_CODE
|
In the Matrix is there any indication in films or novelization that time travel is possible?
It seems to me that even in the Matrix, paradoxes would still occur that might prevent time travel, or that other aspects of the Matrix that are there to faithfully duplicate "the real world" might prevent it. Has time travel been discussed?
If there's one thing The Matrix needs, it's more complicated universe mechanics.
future-works might apply here since there is a new film dropping that...as Paul states...
@NKCampbell - Future works doesn't apply here since OP is asking about the films, etc that have been released thus far.
There is, as yet, no hint that time travel exists within the universe of The Matrix (the four films), nor is it mentioned or used in any of the Animatrix stories, web comics or supplementary artbooks or scriptbooks.
Additionally, if anyone (human or machine) had ready access to time travel technology, there's no good reason for them not to have used it as a weapon, for example, to prevent Smith's takeover of the Matrix or the rise of the "singular consciousness" that caused the downfall of humanity.
Without knowing how the time travel would even work in The Matrix universe, I don't know if it's safe to say "there's no good reason" for them not to have used it as a weapon. If it was written as a causal time loop with no alternate timelines or anything; I'm sure you could come up with a lot of time travel plots that explain why it was never able to be used as a weapon (e.g. everyone who attempted it actually caused The Matrix in the first place). You can come up with a lot of "good reasons" when you add time travel in the mix. The first paragraph seems like more than enough reason.
@JMac - There's zero indication it exists. If it existed, one or both side would have tried to use it. Now you need to invent additional reasons why they didn't use it
My point was more than when you introduce time travel, you can make some really convoluted plots. There's nothing really stopping someone from making a prequel story about how they did use time travel; but because of a series of events all that it was able to do was actually put the events in motion that led to The Matrix, for example. Like you say, there's no indication that it exists; but because it's a fictional story, if the people in charge wanted to, they could always add it in and still leave it self-consistent with the existing plots.
So my point is that although you say "there's no good reason not to use it", a time travel plot could easily incorporate them already using the technology, and come up with its own reasons why we don't notice it in any of the other Matrix media. It would almost definitely be terrible writing; but it could be done.
Regarding your side comment, I assumed the question was asking about time travel within the Matrix, which would be under the control of the machines and not weaponizable. (except perhaps against Smith and other free agents)
@Z.Cochrane - Ah well, now the machines use a mixture of memory control and "deja vu" small changes to control the surroundings. Whether that qualifies as 'time travel' is sorta kinda debatable.
If a human created a concept of time travel within the Matrix, it would not be real; nothing invented or created in the Matrix is real, and the Matrix decides if something is allowed to happen. Therefore there is no possible way a human could invent a time machine in the Matrix that would actually work. The Matrix could present the idea of time travel by creating a new VR environment that looks like the past and then making the human think that they had returned to the future. But more likely, the Matrix just makes the time machine not work, and probably blows it up to stop future work.
It depends how you define time travel in the matrix universe.
The Earth in reality is in the future, but the Matrix always remains in the 20th century, so from the point of view of those locked in the Matrix, if they escape they have traveled to the future.
Within the Matrix anything is possible, so someone could "create a time machine" which in reality just allows them to exist in their own matrix that looks like the past.
Outside of the Matrix, it is unlikely that Humanity has either the capacity or the technology to investigate and make Time Travel work, they are struggling just to survive. In addition Humanity is periodically wiped out by the machines so it is unlikely the amount of knowledge would be gathered to allow a breakthrough like Time travel.
In terms of the Machines, it seems unlikely they would have a desire to investigate the idea of time travel, there would be little to gain for them.
Now the real question: what happens if someone in the Matrix is able to create their own matrix? What if the whole of the Matrix (machines and humans) is a VR inside a VR?
|
STACK_EXCHANGE
|
What is the origin of the phrase "Life is too short to ..."?
As the title says, what is the origin of the phrase "Life is too short to ..." used with things conventionally expected, but which you do not want to do?
Example: Life is too short to find a pair of matching socks.
Undoubtedly someone mumbled that a few thousand years ago while in the throes of a mead-induced hangover. Probably some Stonehenge salaryman on his way out the cave door to galumph after a Spotted Dick Puddingosaurus.
@Downvoter Please explain why the downvote?
I'm not the downvoter, but there are a few possibilities. There seem to be some users here who take pleasure in downvoting for no reason at all: maybe they don't like your avatar or user name. They never explain. Others will downvote because they don't think your question is interesting enough for them. They usually don't explain. Still others will see no evidence of your research on the topic before asking. They frequently explain and identify themselves, but not always. There are yet others who think such questions are silly. They never explain or own up. Don't ask, don't tell.
Never mind, just curious to know. I don't care much about the reputation. Thanks for the nice explanation. :)
The OED’s earliest citation illustrating life is too short is from the seventeenth century, and in 1686 ‘For life's too short for Pleasure’ appeared. The earliest citation using the words life is too short to is from 1741: ‘Life is too short, and Time is too precious, to read every new Book quite over in order to find that it is not worth the reading.’ One of its more memorable incarnations is in Thomas Love Peacock’s ‘Life is too short to learn German’.
Google Books has the phrase used by Jonathan Swift in 1711 (published in 1765):
Being convinced by certain ominous prognostics that my life is too short to permit me the honour of ever dining another Saturday with Sir Simon Harcourt...
It would appear the OED may need to revise its citations.
Careless reading on my part. Answer now amended to show the earliest citation dated 1741. This postdates the composition of Swift’s sentence, but predates its publication, and publication seems to be what the OED goes by. We can, of course, usually assume that a word or expression was in use before the OED's earliest citation. The dictionary has strict, some might say idiosyncratic, criteria for inclusion.
So many people say: "Life is too short!" Personally I get so TIRED of people using that phrase! They use it so often as some type of "filler" when it comes to relationships. NONE of us know how long our life is going to be. For some of us it may be short while for others it will be long. The question before us is an existential question:
"What are you doing in the PRESENT MOMENT to reach out to others in discovering ways to show/demonstrate genuine AGAPE LOVE?" Too often we are more concerned as to what is life going to give me today to fulfill MY needs--rather than what can I DO to enable others to experience an enriched moment that gives them hope that life is given to us by God to enjoy His mighty blessings. WE ARE God's LANGUAGE to help enable life being EXCITING for all the people we encounter EACH day!
Rev. H Norman Campbell
Retired United Methodist minister
Baton Rouge, Louisiana
This is interesting but I'm not sure how it answers the explicit question by the OP (the origin of the phrase)
Rev. Campbell, a great place to express your personal opinions about a question is the [chat]. The Answer area is specifically for giving an authoritative answer to the original question.
|
STACK_EXCHANGE
|
Replacing backslash '\' in python
When trying to replace '\' in Python, the data changes and gives me unknown letters.
I have tried str.replace, re.sub, and regex_replace.
a = '70\123456'
b = '70\123\456'
a = a.replace('\\','-')
b = b.replace('\\','-')
Expected Result:
a = '70-123456'
b = '70-123-456'
But The Actual Result is:
a = '70S456'
b = '70SĮ'
What is the problem and how to solve it?
I suggest you print a and b before calling replace
Try print('70\123456'). That string never contained a backslash to begin with…
Try to print(a, b) right after you define them. Do any of them have a backslash?
You should quote backslash not only in replace function.
See what \ooo means in a string literal: https://docs.python.org/3/reference/lexical_analysis.html#index-22
In Python strings, a backslash followed by digits can (sometimes) mean a special character; if you want to represent literal backslashes, you should double them (e.g. "\\") or use raw strings (r"")
They don't need to be special, e.g. "\110\145\154\154\157\054\040\127\157\162\154\144\041" is valid but not canonical. repr can frequently help.
thank your for your explanations!
That's because \123 and \456 are special characters (octal escape sequences).
Try this:
a = r'70\123456'
b = r'70\123\456'
a = a.replace('\\','-')
b = b.replace('\\','-')
print(a)
print(b)
I want to make a script to concatenate 'r' with '70\123456'.
from pyspark.sql.functions import concat_ws
n = concat_ws('r','70\123456')
print(n)
result: Column<b'concat_ws(r, 70S456)'>
@CharbelKeedy I didn't understand what you mean.
My fields are coming as 70\123456, not as r'70\123456'. How do I handle it?
@CharbelKeedy What do you mean by "fields are coming like this"? The r prefix is not something you concatenate; it only changes how a literal is parsed in source code. If the value arrives at runtime (read from a file, a database, etc.), it already contains a real backslash and replace('\\','-') works; the escape problem only affects literals typed into source code.
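To summarize the answers above, a short sketch showing that the plain literal never contained a backslash at all, while a raw string preserves it:

```python
# Octal escapes vs. raw strings (illustrative sketch).
plain = '70\123456'   # \123 is an octal escape: chr(0o123) == 'S'
raw = r'70\123456'    # raw string keeps the backslash literally

assert plain == '70S456'                      # no backslash ever existed here
assert raw.replace('\\', '-') == '70-123456'  # replace works on the real backslash
print(raw.replace('\\', '-'))
```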
|
STACK_EXCHANGE
|
/**
* Extracts the time from a Date object
* @param {Date} uneHeure Date object
* @returns "hh:mm"
*/
function formatageHeure(uneHeure){ // expects a Date
var h = uneHeure.getHours();
if(h<10){h = "0" + h;}
var m = uneHeure.getMinutes();
if(m<10){m = "0" + m;}
return h + ":" + m;
}
/**
* Performs an action depending on the element clicked by the user
* @param {HTMLElement} element Clicked element (click event)
*/
function actionOnClick(element){
if(!document.querySelector('#popUpReservation')){
var dataRes = element.target.className.split(' ');
var popUp = initPopUp(element);
if(dataRes.indexOf("cell")>=0){
popUpNewRes(popUp);
}else{
popUpEditRes(popUp);
}
}else{
destroyNode(document.querySelector('#popUpReservation'));
}
}
/**
* Builds a popup at the position of the clicked element. It inherits its class name.
* @param {HTMLElement} element clicked element
* @returns a ready-to-use popup (HTMLElement DIV)
*/
function initPopUp(element){
if(document.querySelector('#popUpReservation')){
var popUp = document.querySelector('#popUpReservation');
destroyNode(popUp);
}
var popUp = document.createElement("div");
document.querySelector("body").insertBefore(popUp,document.querySelector("#menu"));
popUp.id = "popUpReservation";
popUp.className = element.target.className;
popUp.style.position = "absolute";
popUp.style.backgroundColor = "rgba(85, 128, 185, 0.712)";
popUp.style.borderRadius = "3px";
popUp.style.padding = "3px";
popUp.style.width = "145px";
popUp.style.height = "195px";
popUp.style.zIndex = "1";
//console.log(element);
popUp.style.top = (element.target.getBoundingClientRect().top + window.scrollY) + "px";
popUp.style.left = (element.target.getBoundingClientRect().left + window.scrollX) + "px";
return popUp;
}
/**
* Initializes a room reservation popup
* @param {HTMLElement} popUp a popup
*/
function popUpNewRes(popUp){
var hRes = popUp.classList[1].split(":");
var hResMin = [];
if(hRes[1]==="00"){
hResMin[0] = parseInt(hRes[0]);
hResMin[1] = 30;
}else{
hResMin[0] = parseInt(hRes[0])+1;
hResMin[1] = 0;
}
var form = [
{label:" ", input:initInput("hidden","type",[{name:"value", value:"add"}])},
{label:" ", input:initInput("hidden","date",[{name:"value", value:popUp.classList[2]}])},
{label:" ", input:initInput("hidden","reservation_debutH",[{name:"value", value:popUp.classList[1]}])},
{label:"Fin", input:[initInput("number","reservation_finH",[{name:"min", value:hResMin[0]},{name:"max", value:22},{name:"value", value:hResMin[0]}]),initInput("number","reservation_finM",[{name:"step", value:30},{name:"value", value:hResMin[1]},{name:"max", value:59},{name:"min", value:0}])]},
{label:"Places", input:initInput("number","salle_places",[{name:"value", value:1},{name:"min", value:1}])},
{label:"Informatisée", input:initInput("select","salle_informatise",["OUI","NON"])},
{label:" ", input:initInput("submit","btnOk",[{name:"value", value:"OK"}])}
]
var formRes = initForm("formReservation",1,form);
popUp.appendChild(formRes);
formRes.addEventListener("submit", function (event) {
event.preventDefault();
sendData(formRes,"./index.php?action=editerReservation");
});
}
/**
* Destroys an HTML element and its children
* @param {HTMLElement} node element to destroy
*/
function destroyNode(node){
node.remove();
}
/**
* Returns true if elementI is "next to" elementII (their vertical ranges overlap)
* @param {HTMLElement} elementI
* @param {HTMLElement} elementII
* @returns bool
*/
function isNextTo(elementI,elementII){
var posEleI = elementI.getBoundingClientRect();
var posEleII = elementII.getBoundingClientRect();
var res = false;
if(!((posEleII.top>=posEleI.bottom) || (posEleII.bottom<=posEleI.top))){
res = true;
//console.log(elementI,elementII);
}
return res;
}
/**
* Lays out the reservations
*/
function formatMultiRes(){
var firstLine = document.querySelectorAll('[class="08:00"]')[0];
firstLine.childNodes.forEach(element =>{
if(element.childNodes.length>1){
var sizeParent = element.offsetWidth;
for(var i=0;i<=element.childNodes.length-1;i++){
var tb = [element.childNodes[i]];
for(var j=0;j<=element.childNodes.length-1;j++){
//console.log(element.childNodes[i],element.childNodes[j]);
if(i!=j){
if(isNextTo(element.childNodes[i],element.childNodes[j])){
if(tb.indexOf(element.childNodes[j])==-1){
tb.push(element.childNodes[j]);
}
}
}
}
element.childNodes[i].style.width = (100/tb.length)+"%";
for(var k=0;k<tb.length;k++){
tb[k].style.left = ((sizeParent/tb.length)*k)+"px";
tb[k].style.backgroundColor = "rgb(207, "+(40+40*k%255)+", 77)";
if(tb.length>2){
tb[k].style.writingMode = "vertical-lr";
}
}
}
}
// element.addEventListener("click",function(e){
// actionOnClick(e);
// });
});
}
/**
* Builds a form from information given as an array.
* (see: function popUpNewRes())
* @param {string} name form name (name + id)
* @param {Int} nbCol number of columns
* @param {Array} elements description of the form as an array
* @returns a form (HTMLElement)
*/
function initForm(name,nbCol,elements){
var form = document.createElement("form");
form.id = name;
var tb = document.createElement("table");
form.appendChild(tb);
elements.forEach(function(e){
if(nbCol===1){
var ligI = document.createElement("tr");
tb.appendChild(ligI);
ligI.innerHTML = e["label"];
var lig = document.createElement("tr");
tb.appendChild(lig);
if(Array.isArray(e["input"])){
e["input"].forEach(function(e){
lig.appendChild(e);
});
}else{
lig.appendChild(e["input"]);
}
}else{
var lig = document.createElement("tr");
tb.appendChild(lig);
var cellLabel = document.createElement("td");
lig.appendChild(cellLabel);
cellLabel.innerHTML = e["label"];
var cellInput = document.createElement("td");
lig.appendChild(cellInput);
if(Array.isArray(e["input"])){
e["input"].forEach(function(e){
cellInput.appendChild(e);
});
}else{
cellInput.appendChild(e["input"]);
}
}
})
return form;
}
/**
* Builds an HTML element with the given attributes [input,select,button]
* @param {string} type [input,select,button]
* @param {string} name name and id of the element
* @param {array} option=null attributes of the element
*/
function initInput(type,name,option=null){
switch(type){
case "select":
var input = document.createElement(type);
input.name = name;
input.id = name;
option.forEach(function(e){
var opt = document.createElement("option");
opt.value = e;
opt.innerHTML = e;
input.appendChild(opt);
});
break;
case "button":
var input = document.createElement(type);
input.name = name;
input.id = name;
if(option){
option.forEach(function(e){
if(e["name"]==="innerHTML"){
input.innerHTML = e["value"];
}else{
input.setAttribute(e["name"], e["value"]);
}
});
}
break;
default:
var input = document.createElement("input");
input.type = type;
input.name = name;
input.id = name;
if(option){
option.forEach(function(e){
input.setAttribute(e["name"], e["value"]);
});
}
break;
}
return input;
}
/**
* Sends data to the server
* @param {HTMLElement} form data to send
* @param {string} url server url
*/
function sendData(form,url) {
var XHR = new XMLHttpRequest();
// Bind the FormData object to the form element
var FD = new FormData(form);
// Define what happens if the submission succeeds
XHR.addEventListener("load", function(event) {
//alert(event.target.responseText);
});
// Define what happens on error
XHR.addEventListener("error", function(event) {
//alert('Oops! Something went wrong.');
});
// Configure the request
XHR.open("POST", url);
// The data sent is whatever the user entered in the form
XHR.send(FD);
}
|
STACK_EDU
|
I was wondering, in which folder can I add a txt file to the backtrack 2 iso? I tried adding it in the /bt folder but it caused a boot error....
But then I won't be able to access my txt file from Backtrack right?
Should I create the folder under /bt or in the main iso folder that contains /boot and /bt? Could I put my text file in the 'optional' folder under /bt ? I can't seem to get it to work
You could try making a directory containing the txt file(s) you want to add, converting it with dir2lzm to an lzm file, and placing that in the modules or rootcopy folder of the CD. Not tested, but it should work. (You should then find the file(s) in the root folder.)
btw: The files in the optional-folder will be ignored, except when you use the "load="-cheatcode. See slax.org/cheatcodes.php
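A sketch of the dir2lzm approach suggested above. The file names here are illustrative, and dir2lzm ships with SLAX/BackTrack, so the conversion step must be run on such a system:

```shell
# Build a module directory whose contents will be overlaid onto / at boot.
mkdir -p module_root
echo "my notes" > module_root/notes.txt

# On a SLAX/BackTrack system, convert the directory to a module and
# add it to the remastered ISO (paths are assumptions):
#   dir2lzm module_root notes.lzm
#   cp notes.lzm <extracted-iso>/bt/modules/
```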
The best place for a single file is the rootcopy directory.
this is so frustrating... I tried burning the iso with aircrack-ptw, by using both methods outlined in another thread suggested by baxter and shamanvirtuel, i.e.
1) Baxter - by replacing root.lzm with an updated root.lzm file and adding aircrack-ptw to the same directory. (/bt/base)
2) Shamanvirtuel - I tried putting the aircrack-ptw.lzm file directly in /bt/modules
and both didn't work... maybe I interpreted the directions wrongly?
I also tried adding the file aircrack-ptw and aircrack-ptw.tar.tgz to the /bt/rootcopy folder and that didn't work either.
As for the text file that I'm trying to include, I put that in the /bt/rootcopy folder and that didn't work either... please help me! I'm burning at 4x using ImgBurn and I've burned a working copy before, so there shouldn't be anything wrong with my bt2final.iso file. I'm creating a new ISO every time I add files to it... what am I doing wrong?
When I try to boot the CD, I get text scrolling saying that there's a problem with the boot, 'sorry..' , something along those lines...
I keep getting scrolling text which says "Image Checksum error. Sorry...". This time I tried putting the new aircrack suite.lzm in the /bt/modules folder and the same thing happened... do I have a corrupted file?
|
OPCFW_CODE
|
By rituchhibber on Sep 17, 2012
Chris Baker SVP
Oracle Worldwide ISV-OEM-Java Sales
Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners’ business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills.
Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA?
Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straight-forward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best—building applications to meet the unique industry and functional requirements of their customer. We want to ensure that we deliver a best-in-class application platform so ISVs are free to concentrate their effort on their application functionality and user experience. We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. It's really important that architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success.
How do you believe this strategy is helping the ISVs to work hand in hand with Oracle to ensure that end customers get the industry-leading solutions that they need?
We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems including Exadata and Exalogic. So, for example, they can become 'Database Ready' which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready and Oracle Solaris Ready which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use of our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme.
How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly?
One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in the cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build in to our products to help them to fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can talk confidently about implementation being the same every time they do it. Very large ISV applications could take a year or two to be implemented at an on-premise environment once. But it wasn't just the configuration of the application that took the time, it was actually the infrastructure - the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry.
What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships?
My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'ExSite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. The only way that our partners will be able to provide a single vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data—a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at firstname.lastname@example.org or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world based on Oracle.
What opportunities are immediately opened to new ISV partners joining OPN?
As you know OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation.
Finally, are there any other messages that you would like to share with the Oracle ISV community?
The crucial message that I always like to reinforce is architecture, architecture and architecture! The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: "I will be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud". The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them.
|
OPCFW_CODE
|
Scott Sellers, CEO, President, and Co-Founder
Gil Tene, CTO, Co-Founder
Azul Systems, Inc., a privately held company, develops runtimes (JDKs, JVMs) for executing Java-based applications. Founded in March 2002, Azul Systems is headquartered in Sunnyvale, California, with offices in London, United Kingdom; Saint Petersburg and Novosibirsk, Russia and Bangalore, India.
Azul produces Zing, a Java Virtual Machine (JVM) and runtime platform for Java applications.
Zing is compliant with the associated Java SE version standards. It is based on the same HotSpot JVM and JDK code base used by the Oracle and OpenJDK JDKs, with enhancements relating to Garbage Collection, JIT Compilation, and Warmup behaviors, all aimed at producing improved application execution metrics and performance indicators.
Key feature areas touted by Zing include:
- C4 (Continuously Concurrent Compacting Collector): A garbage collector reported to maintain concurrent, disruption-free application execution across wide ranges of heap sizes and allocation rates [from sub-GB to multi-TB, from MBs/sec to tens of GB/sec]
- Falcon: An LLVM-based JIT compiler that delivers dynamically and heavily optimized application code at runtime
- ReadyNow: A feature aimed at improving application startup and warmup behaviors, reducing the amount of slowness experienced by Java applications as they get started or restarted
Zing first became generally available on October 19, 2010. The company was formerly known for its Vega Java Compute Appliances, specialized hardware designed to use compute resources available to Java applications. Zing utilized and improved on the software technology initially developed for the Vega hardware. The product has been regularly updated and refreshed since.
Zulu and Zulu Embedded JVM
Azul distributes and supports Zulu and Zulu Enterprise, a certified binary build of OpenJDK. The initial release in September 2013 supported Java 7 and 6 and ran on Windows 2008 R2 and 2012 on the Windows Azure Cloud. On January 21, 2014, Azul announced Zulu support for multiple Linux versions as well as Zulu Enterprise, which has subscription support options. Support for Java 8 was added in April 2014 and Mac OS X support was added in June 2014. In September 2014, Zulu was extended to support Docker. Zulu Embedded, which allows developers to customize the build footprint, was released in March, 2015.
Developed for manufacturers in the embedded, mobile and Internet of Things (IoT) markets, each Zulu Embedded build is verified by Azul using the Java Community Technology Compatibility Kit (TCK) and incorporates the latest OpenJDK bug fixes and security patches.
Azul produces the jHiccup open source performance measurement tool for Java applications. It is designed to measure the stalls or "hiccups" caused by an application's underlying Java platform.
Azul Systems was founded by Scott Sellers (now President & CEO), Gil Tene (CTO), and Shyam Pillalamarri (VP of Engineering).
Azul was initially founded as a hardware appliance company; its Java Compute Appliances (JCAs) were designed to massively scale up the usable computing resources available to Java applications. A proxy Java Virtual Machine (JVM) installed on an existing system would transparently redeploy Java applications to the Azul appliance. The first compute appliances, offered in April 2005, were the Vega 1 based models 960, 1920 and 3840, consisting of 96, 192 and 384 processor cores, respectively. The latest appliance versions, based on the Vega 3 platform, contained up to 864 processor cores and 768 GB of memory.
With the introduction of Zing in 2010, the company transitioned to producing software-only solutions, later adding Zulu (2013) and Zulu Embedded (2015). It retired its Vega hardware appliance product lines in 2013.
Stephen DeWitt previously held the position of CEO.
Based on public filings, Azul has raised more than $200M in financing to date.
Major investors include Accel Partners, Austin Ventures, Credit Suisse, Meritech Capital Partners, Redpoint Ventures, Velocity Interactive Group, and Worldview Technology Partners. ComVentures and JVax Investment Group have also invested in Azul.
- Azul Systems - Official website
- Priming Java for Speed - Azul CTO Gil Tene's presentation from QCon SF 2014 (video)
- Understanding Java Garbage Collection - Azul CTO Gil Tene's presentation from SpringOne 2GX 2013 (video)
- C4 white paper - White paper from the ACM conference describing the C4 (Continuously Concurrent Compacting Collector) garbage collection algorithm. Authors: Gil Tene, Balaji Iyengar and Michael Wolf, all of Azul Systems
- Enabling Java in Latency-Sensitive Environments - Video of Azul CTO Gil Tene's presentation from QCon New York 2013
|
OPCFW_CODE
|
Component specifications needed for a high voltage front-end
I have an ADC here that in the datasheet has this example application circuit for a 100 - 240 VAC front end voltage measurement.
If I was going to build this circuit, what ratings should my parts have in order to survive the high voltages?
Here is the list of the values and links to parts that I have chosen that I think would work:
Rv
Rhi = 300k Ohms
Rlo = 750 Ohms
Rfilt = 49.9 Ohms
Cfilt = 1000pF
Putting aside the value and tolerances, what package should I choose for the resistor? This I think can be answered by calculating the maximum power (as higher power requires a bigger package): assuming the highest voltage we normally expect is 350V, a 900k + 750 ohm voltage divider would only have about 0.3mA of current going through it, so its power rating can be as low as 0.1W (0.0003 A × 350 V ≈ 0.1 W).
Can my resistor really just be a 0603 package 0.1W like the one I linked?
Are those tiny packages really up to the task of high voltage mains?
It is unlikely that you will find resistor packages rated for 240VAC - just read the datasheet. Even if you do, you still have issues such as pad distance and so on.
The resistors have a voltage rating as well. The 350V max can be divided down to 116V each for the Rhi. Add some margin, and that is the package size to choose. For the one you linked to, that'd be a 1206 package.
@WesleyLee I do acknowledge PCB placement as another problem, but that will be a problem for another day. For now I'm first trying to gather the appropriate parts.
@Aaron Oh, so that is the reason why there are 3 in series; I was wondering why there are 3 in series and not just 1 900k ohm resistor. But won't the first resistor receive the full 350V?
I didn't mean the placement of the components but the actual distance between pads of a 0603 footprint.
@Jakequin You need to learn about current and voltage and Ohm's Law and voltage dividers. No, the first resistor only "sees" one third of the 350V.
Using 0603 resistors for Rhi is probably not adequate. For mains voltage you probably want 1210 or 2512 sizes. Rlo, Rflt, Cflt will see much less voltage, so 0603 is probably adequate for those.
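As a quick sanity check on the numbers above, here is the divider arithmetic worked out in Python (a sketch using the values from the question: a 350 V worst-case input, three 300 kΩ high-side resistors, and a 750 Ω low-side resistor):

```python
# Worst-case divider analysis for the 100-240 VAC front end.
# Assumed values from the question: 3 x 300k high-side, 750R low-side.
V_MAX = 350.0            # worst-case peak input voltage (V)
R_HI = 3 * 300e3         # total high-side resistance (ohms)
R_LO = 750.0             # low-side resistance (ohms)

current = V_MAX / (R_HI + R_LO)      # divider current (A)
v_per_rhi = current * 300e3          # voltage across each 300k resistor (V)
p_per_rhi = current ** 2 * 300e3     # power in each 300k resistor (W)

print(f"current   = {current * 1e3:.2f} mA")
print(f"V per Rhi = {v_per_rhi:.0f} V")
print(f"P per Rhi = {p_per_rhi * 1e3:.1f} mW")
```

Each 300 kΩ resistor sees roughly 117 V but dissipates well under 50 mW, which is why the limiting factor here is the voltage rating of the package, not its power rating.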
To determine what package you need for a particular voltage, you need to at least look at the following.
Manufacturer Rating. You need to consider what the manufacturer datasheet for a particular part series says is the max operating voltage. For example, page 1 of the Vishay CRCWe3 resistor datasheet gives the following voltage ratings for different package sizes.
https://www.vishay.com/docs/20035/dcrcwe3.pdf
0402 40V
0603 75V
0805 150V
1206 200V
2010 400V
2512 500V
Certification Standards. You need to consider what IPC or UL standards are you trying to meet regarding creepage and clearance on the PCB. For that you need to look at the distance between the pads in the manufacturer suggested footprint (which is also in the datasheet). For example, page 9 of the Vishay CRCWe3 resistor datasheet gives the following clearances (dimension G) for different package sizes.
0402 0.45 mm, IPC2221B Table 6-1, up to 30V
0603 0.75 mm, IPC2221B Table 6-1, up to 150V
0805 1.00 mm, IPC2221B Table 6-1, up to 150V
1206 1.50 mm, IPC2221B Table 6-1, up to 300V
2010 1.70 mm, IPC2221B Table 6-1, up to 300V
2512 4.75 mm, IPC2221B Table 6-1, up to 500V
Consult the relevant standards for full details on spacing requirements.
You may also try online tools such as...
http://www.creepage.com/
Pulsed voltage rating.
a. The pulsed voltage rating is usually given in the datasheet as a graph of peak voltage vs time.
b. The pulsed rating should exceed what you expect to see following your front end transient voltage suppressor.
c. For example, page 5 of the CRCWe3 datasheet shows that the 2512 package can withstand a pulse of 1600V for 1ms or less.
d. Build in lots of margin. It's worth the extra $0.25 to prevent someone from being injured or killed when a fault occurs.
SEARCH TOOLS
Some distributors such as Mouser let you select voltage rating as a criteria when searching for resistors.
https://www.mouser.com/Passive-Components/Resistors/SMD-Resistors-Chip-Resistors/Thick-Film-Resistors/_/N-7h7yz
Thank you very much, this answer is very comprehensive. I will follow your advice and build to the highest spec I can, as I have the PCB space and it's not that much more expensive. I will try to use only SMD components so as to confine the high-voltage area to one side of the board, making the other side safer.
Quick question: assuming that I use that part, or any part that is rated to 500V, can I now simplify the 3 series Rhi resistors into one 900k?
@Jakequin You can use as single 500V part as long as you don't have any certification requirements that prevent it. For example, if a certification agency requires safety even in the event of a single faulty component then you may be required to use two resistors. But sometimes (even in safety critical applications) the agencies allow you to treat resistors as infallible and not subject to that rule, so it depends.
Rv is an NTC; use a through-hole NTC with a voltage rating of 400V.
Use Rhi SMD 2512 1W for safety reasons.
You can use 0603 for Rfilt and Cfilt, as the voltage has dropped to a safe level there.
Does Rv have to be through-hole? I would like to keep the bottom section of the PCB free from anything.
In that case use an SMD NTC with a 400V rating; you will find it easily in a local or online store.
Yes, I did find a lot; I was just curious if through-hole has some niche advantage over SMD.
|
STACK_EXCHANGE
|
Wed, Feb 15, 2012
If you work on a large codebase, you'll probably find yourself coming across code where you think "Who wrote this? When? Why?" and wishing you knew more about what led to what you're looking at being written.
If you use a version control system, you can probably find out. If you're using git, it's easy: You use git blame, a helpful function that tells you the commit for every single line of code in a file.
You then look at that commit, and between the commit message and looking at all the code that was written as part of it, you stand a better chance of working out the "Who/why/when" problem.
But that's a bit tedious. So since we all use vim at work, I came up with something nicer: Press a shortcut key, and vim will show you the commit responsible for your current line. The whole process from above, instantly available in your text editor.
It was surprisingly simple, too. All it needs is:
In your .vimrc:
" Get the commit responsible for the current line
nmap <f4> :call BlameCurrentLine()<cr>
" Get the current line number & file name, view the git commit that inserted it
function! BlameCurrentLine()
    let lnum = line(".")
    let file = @%
    exec "!gitBlameFromLineNo " lnum file
endfunction
This maps the functionality to the F4 function key. Feel free to pick another. All it does is get your current line & file, and pass them to a command which does the git stuff.
You could do the rest with a simple bash one-liner, but for easier maintainability, I stuck it into a little Perl script in /usr/local/bin/:
use strict;
use warnings;
use feature 'say';

my $debug = 0;
my $line_no = $ARGV[0];
my $file_name = $ARGV[1];
say "Line: $line_no | File: $file_name" if $debug;
# Get the git blame for the line & file
my $line = `git blame -L $line_no,$line_no $file_name`;
say "Line: $line" if $debug;
# Reduce this just to the SHA
(my $sha = $line) =~ s/^(\S+).*/$1/;
say "SHA: $sha" if $debug;
# Show the commit for that SHA
system("git show $sha");
Yeah, it's a bit over-engineered for such a simple task. But I can foresee having to meddle with it quite a bit in future.
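For comparison, the bash one-liner version mentioned above might look something like this (a sketch, not the author's actual script; it strips the `^` prefix that `git blame` puts on boundary commits, since `git show` won't accept it):

```shell
# Show the commit that introduced line $1 of file $2.
gitBlameFromLineNo() {
    sha=$(git blame -L "$1,$1" -- "$2" | awk '{print $1; exit}' | tr -d '^')
    git show "$sha"
}
```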
But it's really quite cool, being able to press a button in your editor and get everything you might need to know about the line you're working on, just like that.
So here it is, for anyone else who might want it :)
|
OPCFW_CODE
|
import {
Account,
Connection,
PublicKey,
SystemProgram,
sendAndConfirmTransaction,
LAMPORTS_PER_SOL
} from "@solana/web3.js";
import bs58 from "bs58";
const ENCODED_PAYER_KEY = process.env.ENCODED_PAYER_KEY;
const AIRDROP_AMOUNT = 100 * LAMPORTS_PER_SOL;
export default class Faucet {
private checkBalanceCounter = 0;
constructor(
private connection: Connection,
private feeAccount: Account,
public airdropEnabled: boolean
) {}
static async init(connection: Connection): Promise<Faucet> {
let feeAccount = new Account(),
airdropEnabled = true;
if (ENCODED_PAYER_KEY) {
feeAccount = new Account(bs58.decode(ENCODED_PAYER_KEY));
airdropEnabled = false;
} else {
await connection.requestAirdrop(feeAccount.publicKey, AIRDROP_AMOUNT);
}
const faucet = new Faucet(connection, feeAccount, airdropEnabled);
await faucet.checkBalance();
return faucet;
}
async checkBalance(): Promise<void> {
this.checkBalanceCounter++;
if (this.checkBalanceCounter % 50 != 1) {
return;
}
try {
const balance = await this.connection.getBalance(
this.feeAccount.publicKey
);
console.log(`Faucet balance: ${balance}`);
if (this.airdropEnabled && balance <= LAMPORTS_PER_SOL) {
await this.connection.requestAirdrop(
this.feeAccount.publicKey,
AIRDROP_AMOUNT
);
}
} catch (err) {
console.error("failed to check faucet balance", err);
}
}
async fundAccount(
accountPubkey: PublicKey,
fundAmount: number
): Promise<void> {
await sendAndConfirmTransaction(
this.connection,
SystemProgram.transfer({
fromPubkey: this.feeAccount.publicKey,
toPubkey: accountPubkey,
lamports: fundAmount
}),
[this.feeAccount],
1
);
this.checkBalance();
}
async createProgramDataAccount(
programDataAccount: Account,
fundAmount: number,
programId: PublicKey,
space: number
): Promise<void> {
await sendAndConfirmTransaction(
this.connection,
SystemProgram.createAccount({
fromPubkey: this.feeAccount.publicKey,
newAccountPubkey: programDataAccount.publicKey,
lamports: fundAmount,
space,
programId
}),
[this.feeAccount, programDataAccount],
1
);
this.checkBalance();
}
}
|
STACK_EDU
|
import urllib.parse
from typing import List, Optional
from typing_extensions import Protocol
import selenium.webdriver
from selenium.webdriver.chrome.options import Options as ChromeOptions
from selenium.webdriver.support.ui import WebDriverWait
DEFAULT_TIMEOUT = 30
# Easier to maintain here rather than in a stub file
class SeleniumDriver(Protocol):
def get(self, url: str) -> None: ...
def quit(self) -> None: ...
def refresh(self) -> None: ...
def find_element_by_tag_name(self, name: str) -> 'SeleniumElement': ...
def find_element_by_id(self, id: str) -> 'SeleniumElement': ...
def get_attribute(self, name: str) -> str: ...
class SeleniumElement(Protocol):
def click(self) -> None: ...
def send_keys(self, text: str) -> None: ...
def clear(self) -> None: ...
@property
def text(self) -> str: ...
@property
def tag_name(self) -> str: ...
def find_element_by_tag_name(self, name: str) -> 'SeleniumElement': ...
def find_element_by_id(self, id: str) -> 'SeleniumElement': ...
def get_attribute(self, name: str) -> str: ...
def get_property(self, name: str) -> str: ...
class TextToChange():
def __init__(self, browser: 'Browser', id: str) -> None:
self.browser = browser
self.element_id = id
element = self.browser.get_element(id=id)
self.old_text = element.text
def __call__(self, driver: SeleniumDriver) -> bool:
element = self.browser.get_element(id=self.element_id)
current_text = element.text
return current_text != self.old_text
class Browser:
""" A nice facade on top of selenium stuff """
def __init__(self, *, headless: bool = False) -> None:
self.base_url = "http://127.0.0.1:3000"
options = ChromeOptions()
options.headless = headless
if headless:
options.add_argument("--no-sandbox")
options.add_argument("--disable-extensions")
options.add_argument("--disable-translate")
# note: does not seem to work on tiling Window Managers
options.add_argument("window-size=1200x600")
self.driver = selenium.webdriver.Chrome(options=options)
self.get("/")
def find_element(self, **kwargs: str) -> Optional[SeleniumElement]:
assert len(kwargs) == 1
res = self.find_elements(**kwargs)
if not res:
return None
return res[0]
def get_element(self, **kwargs: str) -> SeleniumElement:
assert len(kwargs) == 1
res = self.find_elements(**kwargs)
assert len(res) == 1
return res[0]
def find_elements(self, **kwargs: str) -> List[SeleniumElement]:
assert len(kwargs) == 1
name, value = list(kwargs.items())[0]
name = name.rstrip("_")
func_name = "find_elements_by_" + name
func = getattr(self.driver, func_name)
res: List[SeleniumElement] = func(value)
return res
def wait_for_element_presence(self, *, id: str, timeout: int = DEFAULT_TIMEOUT) -> SeleniumElement:
driver_wait = WebDriverWait(self.driver, timeout)
def element_is_present(driver: SeleniumDriver) -> bool:
res = self.find_element(id=id)
print(f"waiting for {id} to appear")
return res is not None
driver_wait.until(element_is_present)
return self.get_element(id=id)
def wait_for_any_element(self, *, ids: List[str], timeout: int = DEFAULT_TIMEOUT) -> None:
driver_wait = WebDriverWait(self.driver, timeout)
def elements_are_present(driver: SeleniumDriver) -> bool:
print(f"waiting for {ids} to appear")
found = [self.find_element(id=id) for id in ids]
return any(found)
driver_wait.until(elements_are_present)
def wait_for_element_absence(self, *, id: str, timeout: int = DEFAULT_TIMEOUT) -> SeleniumElement:
driver_wait = WebDriverWait(self.driver, timeout)
def element_is_absent(driver: SeleniumDriver) -> bool:
res = self.find_element(id=id)
print(f"waiting for {id} to disappear")
return res is None
res: SeleniumElement = driver_wait.until(element_is_absent)
return res
def wait_for_button_enabled(self, *, id: str) -> SeleniumElement:
button_id = id
element = self.find_element(id=button_id)
assert element
assert element.tag_name == "button"
driver_wait = WebDriverWait(self.driver, timeout=DEFAULT_TIMEOUT)
def button_is_enabled(driver: SeleniumDriver) -> bool:
element = driver.find_element_by_id(button_id)
print(f"waiting for {id} to be enabled")
return not element.get_attribute("disabled")
driver_wait.until(button_is_enabled)
return element
def wait_for_text_change(self, *, id: str, timeout: int = DEFAULT_TIMEOUT) -> SeleniumElement:
driver_wait = WebDriverWait(self.driver, timeout)
res: SeleniumElement = driver_wait.until(TextToChange(self, id=id))
return res
def _to_full_url(self, segment: str) -> str:
return urllib.parse.urljoin(self.base_url, segment)
def get(self, segment: str) -> None:
full_url = self._to_full_url(segment)
self.driver.get(full_url)
def refresh(self) -> None:
self.driver.refresh()
def delete_cookies(self) -> None:
self.driver.delete_all_cookies()
def close(self) -> None:
self.driver.quit()
|
STACK_EDU
|
computers men: i want to make a computer do a thing that humans do
me: maybe you should ask the humans who do that thing how they do it
computers men: preposterous. absurd
@dankwraith That very task is a thought problem I run over and over in my head. It's a fascinating problem.
@dankwraith Capitalist: we will make the machines do things that humans are already good at and make the humans do things that the machines are already good at.
Me: why not let humans do human things and have machines do machine things? It would be easier and more efficient.
Me: let each play to their strengths and cooperate instead of working at cross purposes.
Capitalist: I’m calling security.
@dankwraith I'm trying to fix it, all right? give me a break here
Wasn't that the whole basis of expert systems? You run into the same issue as anthropology, or any other field that relies on asking humans questions. There are three truths: the truth they tell you, the truth they believe, and the actual truth.
What a bleak future! I think as a society we have to rethink the image of software systems being impartial. Bias is bad, but I'd argue unrecognized systemic implicit bias might be worse.
@alexjgriffith @dankwraith I'm not sure why "observing behavior" or "sampling correctly" seems like a less daunting bias elimination task to you than interpreting and evaluating answers. behavior is also nontrivially interpreted by the observer. 'data crunching' is not more objective— the computer just does what we tell it. it is just possibly economical in scale- it allows your questionable interpretive choices to play out over larger sets of questionable data more quickly
As a trivial example: if I'm creating a system to enter the letter a, and I ask you how many times you enter the letter a in an average hour of work, you're not going to give me an accurate answer, whether you want to or not.
The classic example of the limits of expert systems was the development of flight assist programs. Pilots were unwilling / unable to accurately answer questions concerning their flight activities.
@dankwraith i desperately want to print this out and show it to my dad so he can laugh about it but then i would have to explain mastodon to him, and i am too much of a coward for that unfortunately
|
OPCFW_CODE
|
Why did Bobby Fischer quit chess after the match with Spassky?
I am not asking about the 1975 "resignation" to Karpov, but for which reasons did Bobby Fischer gradually lose interest in competitive chess after becoming World Champion? Jews or not Jews, he did not play a single tournament or match game until the 1992 so-called "re-match".
There is no real definite answer for this question. Many people have floated different theories, most of which borrow bits and pieces from each other. It is quite possible that his mental struggles just got the best of him or that he lost interest after attaining the summit of the game. Psychologically, it must have been hard to cope when you get the thing you've been working for most of your life. In My Great Predecessors, Volume IV, Garry Kasparov offers a few reasons, arguing that Fischer probably would have lost to Karpov and probably knew it (that point is debatable).
Regardless, Fischer forfeiting the title was not without precedent in his career. He dropped out of or quit major competitions, or chess itself, on more than one occasion. In fact, despite his spell of dominance in the Candidates and title matches, his career had almost always been erratic. The reasons for this are debatable as well--I've seen people argue that it was mental illness, strategy, or just intimidation--but the answer is probably a mix of reasons.
Did anyone also consider that after the match he found himself with a relatively big amount of money, at least for those times, after struggling all his youth to live a decent life as a chess professional? I think that this factor is rarely mentioned nowadays. It might be more important than it seems, I think.
That definitely seems like it could be a contributing factor although a) he might have been doing ok for a while with sponsorship, b) the elements that lead to his break were already in place.
why no mention of fischer random/fischerandom/fischerrandom/chess960/chess959?
@BCLC I am not aware of him actually competing in chess960 tournaments or matches, so while he did release some rules, he wasn't actively playing chess after the match. Also, the question was really about the time period between the two Spassky matches.
i'm fairly certain there were no chess960 tournaments at the time, but i was wondering about how at least part of the quitting chess was the openings and stuff that eventually led fischer to create chess960. so like did you perhaps think this was not necessarily relevant? or what?
@BCLC It didn't think it was particularly relevant to the question, which is about his interest in competitive chess.
... Old RJF on chess. Why Fischer hated chess. Who's the best ever - are you sure?
He had mental problems. But he is still one of the greatest chess players of all time.
Joseph Ponterotto has even written a book about Fischer's mental problems - A Psychobiography of Bobby Fischer
Ponterotto believes the evidence is strongest for paranoid personality
disorder, a psychiatric condition characterized by unrelenting
paranoia and suspicion of others, but is not schizophrenia.
Not quite sure why somebody gave you a down vote because you are absolutely right. Fischer had serious mental problems. Even after smashing up some of the other candidates 6-0 he only played the world championship match because the British businessman Jim Slater gave $50,000 to make up the prize money to an acceptable level for Fischer. After Fischer's disgusting behaviour the Soviets would have been well within their rights to claim the match by default but they (and Spassky) very much wanted the match to go ahead and made concessions.
Some elaboration on what kind of mental problems (and maybe some references) would greatly improve this answer.
Same as Morphy: mental illness, paranoia. See Wikipedia on his mental state. Also Dr. Joseph Ponterotto had a very good article about Bobby: https://legacy.fordham.edu/campus_resources/enewsroom/inside_fordham/may_7_2012/in_focus_faculty_and/decrypting_bobby_fis_82599.asp
Asperger's syndrome is frequently mentioned in regard to Fischer nowadays. Of course these are, and will remain, speculations.
why no mention of fischer random/fischerandom/fischerrandom/chess960/chess959?
Fischer declared in several media channels that his goal, all of his life, had been to become World Chess Champion. Once he had accomplished that, he concluded that playing chess competitively was no longer a challenge, and that if he did not have something significant to gain, there was no longer any reason for him to play.
I think he also may have just lost interest in chess. He did invent Fischer Random chess later, after his resignation to Karpov, saying that normal chess had become really bland for him. This and his mental issues combined led to him quitting. I think it's a shame.
I don't know if he lost interest in chess but it's a fact that he did not play any rated game after the match. Fischer Random is a product of the 1990s, I am talking about the mid-1970s Fischer.
Good answer from trailrunnersquared and also a good response from @A.N.Other. I have a response of my own; see this interview:
Interviewer: Did you gradually start to hate chess, or did it come suddenly?
Bobby: Did I gradually? I think it came gradually, but then at a certain point I was hating it but I didn't know it, because I was still trying to make it work. chess.com and...
... Old RJF on chess. Why Fischer hated chess. Who's the best ever - not sure if here specifically, but in the complete interview on this flight, Fischer says the gradual thing. Even if chess960/959 was born in the 90s, it was conceived in the 70s at the latest.
You have to understand the time. There was a cold war going on and chess was very important to the Soviets who saw chess as proof that their far-left system of government was better. The Soviets would spare no expense to win at chess and used innumerable armies of analysts to analyze every single move Fischer made from the time he was 15.
Fischer was smart enough to realize this and realized he was facing a Kobayashi Maru of sorts (i.e., an unwinnable game). There is no way one man can out-analyze an entire country full of thousands of players analyzing his games.
So, Fischer realized the only solution was to take a break so he could analyze his own lines without others doing the same. That's what he did, and he came back the strongest player in history.
So, after beating Spassky Fischer was faced with a choice. He could play Karpov, who he was undoubtedly better than, and eventually lose the title when the soviet analysts caught up with him.
Or, he could retain the title and ensure that the Soviets never won the "real" title by beating the proven best player in the world.
Part B of that was that Soviet champions would often dictate unfavorable conditions to challengers. By standing firm on his conditions, Fischer demonstrated their hypocrisy, which paved the way for future non-Soviets.
Bottom line, Fischer was playing two chess games. One at the board and the other metaphorical geopolitical chess game. By not playing, he won both. Few people may realize the subtlety of his genius and that is what makes him the GOAT.
Fischer was immature and had mental problems as well as difficulty relating to people in normal ways.
He was extremely arrogant and thought he could dictate what he thought was proper for WC prizes and terms of match conditions.
When they refused to meet his terms and conditions he just quit.
In some ways he was brilliant but in others his EQ was very very low.
why no mention of fischer random/fischerandom/fischerrandom/chess960/chess959?
|
STACK_EXCHANGE
|
Disclaimer: I have no insight into what the exam questions are or what the exam will be like. I am studying for this exam based on the public information available from Microsoft. I have no idea if my study plan will be enough to pass the exam, time will tell. Please use my study guide and reference material as guidance and build your own study plan to suit your needs.
As I explained in my previous blog, I am looking to study for the Microsoft Security, Compliance, and Identity Fundamentals exam and share my journey. Today is another day of studying and making sure I cover off the objectives the exam might ask me.
Microsoft Compliance Solutions
Today I am tackling the section of the exam on "Describe the capabilities of Microsoft Compliance Solutions". This section of the exam takes up 25-30% of the exam, so it's an important section to understand.
This section covers some of the governance and information protection capabilities of Microsoft 365, so this is a section I am definitely not 100% up to speed with. I used to use Microsoft 365 a lot, either migrating customers to it or helping them with their adoption plans. However, over the last few years I've not really used it. My own email address is hosted in a Microsoft 365 tenant so I dabble every now and again, but I'm not using it every day.
Again turning to Microsoft Learn, there is a learning path that covers this section of the exam: Describe the capabilities of Microsoft compliance solutions.
I didn't have a lot of time today to study, but I managed to cover a couple of the modules in the learning path and learn some things I wasn't sure of, so it's been a good session! 😁
The Microsoft Compliance Manager isn't something I'm familiar with, but I've learnt it is an end-to-end solution in the Microsoft 365 compliance center that enables admins to manage and track compliance activities. Whereas the compliance score, which is available through the Compliance Manager, gives you an overview of your overall compliance posture.
The Compliance Manager can help you understand and increase your compliance score and help to stay within compliance requirements.
I'm not sure I could have told you the difference between the two before today's study session.
I also learnt a bit more about the Advanced Audit capability within Microsoft 365 and how it can help organisations conduct forensic and compliance investigations. Advanced Audit keeps all Exchange, SharePoint, and Azure Active Directory audit records for one year, but I believe a longer retention capability is on the roadmap.
Sharing my Journey
Making progress everyday is what this journey is about, making sure I learn something and don't just go into the exam and guess the answers and hope for a pass. 😁 Let me know if you are studying for the exam and what resources you are finding useful!
|
OPCFW_CODE
|
One of the character designs we have discussed in our fledgling saga involves a character that has a kind of personal magic. Basically he is quite impervious to external influences from auras, both positive and negative: he gets half the benefit or penalty of an aura, rounding the benefit/penalty down.
We think this will be a minor V&F. However, we are unsure if that should be a benefit (virtue) or a penalty (flaw). Any thoughts on that? Thx
Virtue. I'd trade half benefit from magical aura for only half penalty of divine auras any time.
Hm, then again, your lab total will be reduced, so that's a rather harsh penalty. Still, I think it's more fitting as a Virtue.
Virtue - it reduces dependencies upon outside circumstances. Out of interest, will it also halve botch dice from auras?
Both a minor virtue and a minor flaw
(like cyclical magic)
Virtue then. The V and F approach is another possibility. I will comment on it to see what they think about this. Thx
Didn't think about the botch dice. To prevent it being too good, I would say "no", since the aura is still there even if it does not affect the character as much.
This was my thought as well.
We have this:
Independent Magic: removes bonuses and penalties due to the type and strength of the prevailing aura and replaces them with a +3 bonus
as a Major Virtue. And this:
Independent Magic: halves all bonuses and penalties due to the type and strength of the prevailing aura or Taint and gives a +1 bonus
as a Minor Virtue.
The value of this one depends on the saga.
In a saga where most goings-on go on in Dominion or Infernal Auras, this is a fantastic virtue. In a saga that features expeditions to mythic places with high magical auras, and whose covenant lies in one of them, this is a grievous flaw.
If I were to price them generically, I'd have a minor virtue that halves all Aura effects on the character and his works, rounding toward zero, and a major virtue that eliminates all Aura effects. This is fair; characters in a covenant with Magic 8 won't take these virtues, so there's no problem, whereas those at the Vatican covenant will want them even without an additional bonus.
If it removes/reduces the extra botch dice for auras, it is definitely a Virtue. Mind you, we also use the full aura as extra botch dice even in the aura you are aligned to. I believe this was an object of discussion recently.
A Lab rat or researcher won't take it, but a social magus waltzing around the dominion could. Magi frequenting high powered faerie auras might, if it reduces Botch dice.
To run FLUENT in the background in a C-shell (csh) on a Linux/UNIX system, type a command of the following form at the system-level prompt:
fluent 2d -g < inputfile >& outputfile &
or in a Bourne/Korn-shell, type:
fluent 2d -g < inputfile > outputfile 2>&1 &
In these examples,
The trailing & tells the Linux/UNIX system to perform this task in the background, while the >& (csh) or > ... 2>&1 (sh/ksh) redirection sends all standard output and standard errors (if any) to outputfile.
The file inputfile can be a journal file created in an earlier FLUENT session, or it can be a file that you have created using a text editor. In either case, the file must consist only of text interface commands (since the GUI is disabled during batch execution). A typical inputfile is shown below:
; Read case file
rc example.cas
; Initialize the solution
/solve/initialize/initialize-flow
; Calculate 50 iterations
it 50
; Write data file
wd example50.dat
; Calculate another 50 iterations
it 50
; Write another data file
wd example100.dat
; Exit FLUENT
exit
yes
This example file reads a case file example.cas, initializes the solution, and performs 100 iterations in two groups of 50, saving a new data file after each 50 iterations. The final line of the file terminates the session. Note that the example input file makes use of the standard aliases for reading and writing case and data files and for iterating. ( it is the alias for /solve/iterate, rc is the alias for /file/read-case, wd is the alias for /file/write-data, etc.) These predefined aliases allow you to execute commonly-used commands without entering the text menu in which they are found. In general, FLUENT assumes that input beginning with a / starts in the top-level text menu, so if you use any text commands for which aliases do not exist, you must be sure to type in the complete name of the command (e.g., /solve/initialize/initialize-flow). Note also that you can include comments in the file. As in the example above, comment lines must begin with a ; (semicolon).
An alternate strategy for submitting your batch run, as follows, has the advantage that the outputfile will contain a record of the commands in the inputfile. In this approach, you would submit the batch job in a C-shell using:
fluent 2d -g -i inputfile >& outputfile &
or in a Bourne/Korn-shell using:
fluent 2d -g -i inputfile > outputfile 2>&1 &
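As a rough illustration of the command forms above, here is a small Python helper (a hypothetical convenience script, not part of FLUENT) that assembles the Bourne/Korn-shell batch command line in either style:

```python
import shlex

def fluent_batch_command(solver="2d", inputfile="inputfile",
                         outputfile="outputfile", via_stdin=False):
    """Build the Bourne/Korn-shell command line for a background
    FLUENT batch run, as described above.

    via_stdin=True uses the '< inputfile' redirection form; False uses
    the '-i inputfile' form, which records the commands in outputfile.
    """
    if via_stdin:
        core = f"fluent {solver} -g < {shlex.quote(inputfile)}"
    else:
        core = f"fluent {solver} -g -i {shlex.quote(inputfile)}"
    # '> outputfile 2>&1' merges stderr into outputfile;
    # the trailing '&' backgrounds the job.
    return f"{core} > {shlex.quote(outputfile)} 2>&1 &"

print(fluent_batch_command())
print(fluent_batch_command(via_stdin=True, inputfile="run.jou",
                           outputfile="run.log"))
```

The helper only builds the string; it assumes the fluent executable and solver name ("2d" here) are as described in the text above.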
How to deal with a team that fails to complete a task?
I have been in a situation where the details of the task and the target to be met were explained to the team. However, in spite of multiple follow-ups, they failed to complete the defined tasks within the given timeline.
They come up with reasons such as 'I was occupied with personal works', 'I was not well', etc.
However, in the end, I am the one who has to answer to the client for the missed deadline. How do I deal with this situation with the client and convince him?
Welcome to pmse. Your title asks how to deal with the team, while your question asks how to deal with the client. You should change one of them to fit the other, to avoid your question being closed as too broad.
There is not a single answer to this question. It's imperative to understand how a team should perform so that you can spot dysfunction and respond appropriately. You need to understand the dysfunction to address the problem.
One resource that comes to mind is the book "The Five Dysfunctions of a Team".
While what you mention initially sounds like "avoidance of accountability" and "lack of commitment", this may come from a deadline that is ridiculous or other requirements that are overbearing; but because there's an "absence of trust", the team can't talk about the issues and things end up derailing.
I suggest you refocus yourself as a leader of people (regardless of your title) which is much more than a manager (of anything, people, projects, deliverables, clients, etc). It is how well you are able to lead that affects the performance of the team and acceptance of the client.
Driving the team through control will fail and backfire as you'll invite the bottom four items in the pyramid. Servant Leadership will serve you better. (think agile, read about scrum master principles etc). Anything from John Maxwell on leadership is a good read.
Another resource I'd suggest is "Extreme Ownership - How Navy Seals Lead and Win". It highlights that given two teams - one high performing and one low performing - swapping leaders shows that performance follows the leader (chapter 2 - No Bad Teams, Only Bad Leaders).
You have indicated a few reasons why a task was behind schedule but, in reality, there are far more variables involved that affect schedule performance than a couple of personal reasons. Work is probabilistic, and personal reasons, environmental reasons, external dependency reasons, aggressive target reasons, and then a ton of random or aleatory reasons will affect where you come in on schedule, in both favorable and unfavorable ways. So, how to respond to the team and how to respond to your client really depends on the drivers of why you are late.
Your team will always have personal blockers that affect their individual performance and they will also have natural variability in their performance. Those things should have been considered when you created your schedule and your planned duration. Sometimes it goes your way, other times it doesn't. But if you planned well, you should have both favorable and unfavorable variances that net out to a reasonable over or under schedule.
And you have to have a credible way to manage your schedule so you can unearth variances early and can both mitigate and communicate them out early and often. Use critical path management, critical chain, or earned schedule as a few alternative ways to monitor. When you start seeing you're late, you inform your customer of the forecasted variance and your plan to mitigate. Sometimes mitigation fails, and you need to educate your client about that possibility.
Otherwise, this is what PM is. It is managing risks and variances and communicating the same to all of your stakeholders.
Earned Value Management. I'd re-characterize the problem: the problem isn't that your team isn't completing the task; the problem is that the project manager is unable to communicate to the sponsors the status of the project and is unable to identify or execute interventions to rescue the project.
First thing is to set intermediate milestones. If you measure task completion at task completion, you have already failed. There is no opportunity to intervene or rescue. Task completion needs to be measured at a time when intervention is still possible. If the task is "Deliver the documentation for project Foo", and it takes 2 weeks, but can be rushed for 1 week, then you need a milestone at 1 week. That milestone has to be defined - e.g. "At 1 week I expect an outline and rough draft; the second week will involve wordsmithing, quality checking and peer review." If you don't have that at the 1 week milestone you can add resources, drop scope or negotiate with the sponsors for an extension (perhaps explicitly invoke technical debt).
The project manager's primary responsibility is to at any instant report to management a well formed estimate of project completion. (I estimate that there is a 90% chance that the project will be delivered on January 15, 2019, and a 1% chance that it will be delayed beyond February 2nd). The only way to fulfill that responsibility is to understand risks to schedule and issues. People will get ill, they will have personal problems; that is a natural consequence of working with people rather than parts. That estimate of completion, and the opportunity to take action to increase confidence in that estimate relies on understanding the status of the underlying deliverables and work products.
"I had personal matters" ... "I was not well" ...
This project sucks and I was interviewing with another employer ...
As "eAndy" said, you must become "a leader of people." Everyone has 'personal matters,' and everyone gets sick now and then, but if they're giving you these 'reasons' for why the work isn't getting done – (a) these are merely excuses; and (b) they're serving them to you either because they don't feel that they can speak with you freely, or because they assume that you just don't care.
Any of the "5 dysfunctions of a team" could be at-play here; most likely all of them.
Personally, what I try to do is to get the team to some neutral off-site location, then present myself as someone who is both "part of your team" and "responsible to the folks upstairs, as in fact we all are," and try to clear the air. Then, with both my words and my entire body-language, try to get a breakthrough. Try to – in utter and complete confidence if possible – clear the air. If you can undo that very first bottled-up cork, the rest usually come in a rush. The most important thing that you can do is to listen, and, with your body language, to signal receptivity.
This isn't a great book, or a hugely intelligent or clever book, but it is at least wrong in interesting and provocative/demonstrative ways. The setting is the world in 2040, where technology promises/threatens the creation of post-humans who have mental and physical abilities well beyond the baseline. In particular, the book focuses on technologies and programs that allow the inspection of minds and direct communication between minds. The main story is a kind of action-techno-thriller as different parties try to shape how these technologies will be used and disseminated. As an action-thriller, it is decent and readable.
So, the problems. One is that I found myself skimming surprisingly large sections of the book. Some of the fight scenes became a bit boring and overdone, but the primary culprit was the frequent sections involved with LSD-like trips and cosmic oneness and light and unity. These weren't particularly moving; the author doesn't have the skill to convey religious experience in an interesting way.
This leads into the second problem, which is that while the author has taken the first step of agreeing that brains make minds, he hasn't really allowed the implications of that to seep into the rest of his world view. E.g. if he can see a lab animal that has a wire stuck in its brain to cause it to feel pure joy, and then he turns around and uncritically describes his own metaphysical experience of light and unity as a sort of ultimate and objective good, I think he has sort of failed to put 2 and 2 together.
In addition, in his description of brain techs, the author often falls into the kind of implicit Cartesian Dualism that Dennet complains about. If some nano-probes are in your head and mucking up your memories, you wouldn't feel tendrils in your mind, since there is not a homunculus in your head to observe the changes. To put it another way, you wouldn't feel the change anymore than if someone changed a webpage somewhere out on the web. When the change happened there would be nothing for you to notice, it's just that the next time you went to the webpage, the content would be different. Or at least that's my understanding of things. Similarly it doesn't make any sense to say that a character uses their will to resist these sorts of physical level changes. There are a number of related problems I could go on about, but I will leave it with the above two.
Hmm, what else can I complain about. The main protagonist is a moderately skilled programmer who develops software that runs on/affects brains. He uses his own mind as a test machine for this, i.e. he rolls out changes to his own mind without testing them anywhere else first. And he's writing in, like, low level C. As a programmer, I can't look on these practices with anything but horror. I also feel like I disagree with the author on his main ideological points. He seems to think that group consciousness would be a wonderful thing, while the last thing in the world I would want is to let other peoples' trash minds touch mine. More seriously, I think if you look at programming, where you lay out semi-thoughts in a semi-physical form, the first thing a programmer wants to do when they come to someone else's codebase is to rewrite and refactor everything into their own personal style. The other person's code seems terrible and smelly and alien and you want to make it right. I'm not saying this instinct to refactor is a good instinct, but it is definitely there and it is definitely common. I feel like if the author looks at his own personal experience with programmers, where we can't even deal well with this small shadow of another person's mind, I don't see how he can say that more direct experiences would be better. The author also thinks that these sorts of transformational technologies should be freely available to anyone, and tries to sell that at length in the book. My own opinion is that if we get to the point that anyone with access to a high school biology lab and an internet connection can wipe out large swathes of the human race, we'd be pretty fucked. I'm not a huge fan of behemothic surveillance states, but existential tech threats would be one of the few valid justifications I could think of for them.
Ok, those were the complaints. Despite all of them, I didn't mind the book that much, as it did trigger more thought than most novels. I wouldn't want to read a sequel, but this one was good.
01-29-2015 07:55 AM
I have an Excel spreadsheet with multiple worksheets that I am reading into SAS using the XLSLIB libref. I am converting each worksheet into SAS format via a combination of proc sql and macro code. The problem is the program will fail if someone has the Excel file open on their machine. I only need to read from the file, not write to it. Is there an option I can include in my XLSLIB statement which will access the file as read-only? I tried 'access=readonly' but I still get the following error:
ERROR: Connect: The Microsoft Office Access database engine cannot open or write to the file ''. It
is already opened exclusively by another user, or you need permission to view and write its
01-29-2015 08:07 AM
I don't know what you mean by the "XLSLIB" statement -- do you mean a LIBNAME statement? Or are you using PROC IMPORT? Can you show ALL the code you've tried, not just the ERROR message.
01-29-2015 08:11 AM
Here is what I have:
libname XLSLIB "filepath\filename.xls" access=readonly mixed=yes stringdates=yes;
proc sql;
  create table a as
  select * from dictionary.tables
  where libname="XLSLIB";
quit;

proc sql;
  select memname into :snamlist separated by '*'
  from a;
quit;

proc sql;
  select count(memname) into :n
  from a;
quit;
%put &snamlist; %put &n;
%macro except;
%do i=1 %to &n;
  %let var=%scan(&snamlist,&i,*);
  %let sf=%substr(&var,1,%length(&var)-1);
retain SITE PATID PROTSEG ERRMSG ERRCODE KEY DOC REVIEW REVIEWDT COMMENT;
set xlslib."&var"n (sasdatefmt=(REVIEWDT='mmddyy10.') rename=(SITE=SITE1
*ensures all variables are formatted consistently;
drop site1 patid1 protseg1 errmsg1 errcode1 key1 doc1 review1 reviewdt1 comment1;
format site $5. patid $13. protseg $1. errmsg doc $200. comment $500. errcode 2. review $3. reviewdt mmddyy10. key $40.;
%end; %mend except;
01-29-2015 08:30 AM
A libname to Excel will create a lock on the file. As far as I am aware there is no method to open a document as a libname without keeping it open, as it by its very nature requires access to the file. For instance, you go to look at one of the sheets, and someone else has deleted the file or moved the sheet; then your whole libname becomes corrupt. So yes, it wants exclusive access to the file whilst the libname is in place or whilst someone else has it open. Sure, there are shared workbooks, but that only works in Office technology.
My suggestion: avoid Excel like the plague. Save your data to delimited text files, e.g. CSV, then write a datastep to read in the CSV and create a dataset. You will find that this way you have no "Excel"-related issues (of which there are plenty), you will have full control over how the data is imported/output to the dataset, plus there is no locking issue. As a bonus, your data is portable across systems and open, so anything can read it.
Edit, to avoid the next question: it's pretty simple to create a small VBA macro to save all the sheets out to CSV format: vba - Saving excel worksheet to CSV files with filename+worksheet name using VB - Stack Overflow
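To illustrate why the CSV route sidesteps the locking problem, here is a minimal sketch in Python (outside SAS, with made-up data): reading delimited text needs only shared read access, so someone else having the source workbook open does not matter.

```python
import csv
import io

# A hypothetical worksheet exported to CSV (plain text, no file locks):
csv_text = """SITE,PATID,REVIEWDT
00123,PAT-001,01/29/2015
00124,PAT-002,01/30/2015
"""

# Parsing delimited text is just sequential reading; any tool (SAS
# datastep, Python, R, ...) can consume the same file concurrently.
rows = list(csv.DictReader(io.StringIO(csv_text)))
print(len(rows), rows[0]["SITE"])
```

The column names and values here are invented for illustration; the point is only that a delimited export decouples the data from Excel's exclusive-access behavior.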
01-29-2015 09:37 AM
This works and serves my purpose. I know Excel can be particularly finicky when used in combo with SAS; however, this is my only option for now as I am using Excel to maintain a database which is used to facilitate dynamic reports in the rest of my SAS program.
01-29-2015 09:47 AM
Well, you have identified your issue there then: "I am using Excel to maintain a database which is used to facilitate dynamic reports".
I will replace my car engine with a hairdryer and kettle, then wonder why it doesn't work. What you want is: "I am using a database, which feeds into a custom reporting tool."
Community communication channel
Jangouts development has worked so far in an impulse-driven way. We get contributions and contributors in every Hack Week, pre-GSoC, GSoC itself... But even if we have had contributions from more than 10 people in one way or another, I feel we still lack a common community vision.
There are several topics I would like to discuss with everybody interested in Jangouts. The days of @imobachgs and myself deciding on the project direction in our small office should be over. So the first question is what should be the channel for discussing those topics.
Some projects simply use issues in Github for that kind of conversations. I'm not sure I like that.
Many other projects, including Janus Gateway, use Google Groups for open general discussion (even support), keeping Github issues just for more concrete bug reports and feature requests.
Last but not least, since openSUSE already acted as some kind of patron giving us one slot for GSoC, we could maybe take another step in that direction and ask openSUSE for a mailing list<EMAIL_ADDRESS>What alternative do you prefer? Any other option?
Any mailing list is fine for me. Even github issues is something I can live with, if it comes with some kind of road map, where we wanna go and what should be included and what not.
We could also look at https://www.fossil-scm.org/xfer/doc/trunk/www/foss-cklist.wiki or http://spot.livejournal.com/308370.html for points we might be lacking for good open-source projects.
Overall, I would prefer a ML on a neutral domain (e.g<EMAIL_ADDRESS>over an openSUSE one, to not give the impression that it only works on openSUSE or should be driven mostly by openSUSE people.
+1 on avoiding opensuse.org, since I think this would deter some people outside the openSUSE community. I think Google Groups is a pretty good option - low maintenance, neutral, good for sane people who like mailing lists and also for totally crazy people who prefer web forums ;-)
Also +1 for keeping the github issue tracker. I like the precision that that can bring to even architectural / mission discussions. Mailing list threads can sometimes go off on tangents too easily (although that's fine as long as people remember to change the Subject header ...)
+1 for avoiding opensuse.org and using jangouts.org
I prefer jangouts.org instead feeding data trolls like google
@uhelp What's your suggestion for the technical part of jangouts.org? Buying the domain is the easy part. But if you don't want to feed data trolls I guess you are suggesting to host our own mailing list software on our own server. Or do you have any alternative in mind that is as straightforward as Groups from the setup/maintenance point of view?
Bullshit about opensuse.org. Show some pride! Would be really nice. Quite some projects are hosted by Fedora or Debian and no one bothers. I'd avoid opensuse.org just for the antiquated mailing list software and archive there.
Google Groups is nice for interfacing with non-techies, as it looks like a web forum to them but is actually a mailing list.
Yes, I'd prefer running all of it on a root server. Some Docker containers and things are easily moveable/upgradable.
Jangouts should also run on different distros, so tutorials and setup howtos for other distros are needed as well. An nginx container and a framework for a CMS should do the trick.
And of course a demo site as well.
Hence pride may be the wrong term here.
Nothing prevents us from crediting openSUSE on the Jangouts site.
+1 to Google Groups. Like @lnussel says, it is actually a mailing list, and a lot of projects use it, so non-tech users and tech people that don't normally contribute to open source projects are more accustomed to using it.
If we want to get more contributions to the project, IMHO this solution will be useful for a bigger range of people.
Just in case someone needs it, there is now http://jangouts.org/cgi-bin/mailman/listinfo/devel available.
And then there was the question about the overlap with jitsi which is also opensource videophony and seems to be a rather active and mature project. Did anyone try it? Can we do things they cannot?
I like the idea of using Google Groups. Mailing lists already take a big part of my day and I would not like to overload it. I never used Google Groups, but I saw it in some projects and the fact that it looks like a forum would make it easy to catch up when threads get very big, plus it's more organised from a UX point of view :)
Github issues are also a good option to me. So, either Google Groups or Github Issues, they both look like good ideas.
What is important to me is what @jreidinger mentioned in his comment: having and preparing some sort of road map with milestones, and how we could decide on where to go, voting on issues perhaps? or do we program/design upon what we think is best for the project?
@bmwiedemann Thanks for setting up a mailing list! I'm fine with using it (but I'm also not against Google Groups, although I'm not a big fan). I would use the ML for community communication and general discussion and Github issues for specific development code and bug reporting (they look like complementary to me).
My 2 cents.
PS: I agree that we urgently need some kind of vision/roadmap for the future.
About the rest of the infrastructure, currently we have a demo server at https://jangouts.tk/. Unfortunately we're missing proper documentation.
Regarding the Docker instances I'll be happy to share them, for example, in Docker Hub. But we need some manpower to create, maintain and document them. I'm quite interested in Docker and I could try to do it but if we have some expert onboard... :)
Ok. So looks like everybody likes the combination of
Github issues to track feature requests
a non-openSUSE mailing list
The only point that is not crystal clear from the discussion is whether that list should be Google Groups or a self-hosted mailman.
For me, the self hosted solution is only an option if we have at least two volunteers that can sign with blood that they will administer that service for at least one year. Otherwise, I prefer the almost-zero setup and zero maintenance solution (Google Groups).
If we don't have such volunteers, the decision is made. If they step up, I will create a poll to decide between both possibilities.
To speed things up, no need to wait for the volunteers to pop up in order to start voting. So...
+1 to this comment (using the "add your reaction" button in Github) if you prefer Google Groups.
-1 to this comment if you prefer the self-hosted mailman (assuming we have volunteers to maintain the service).
@bmwiedemann @aspiers @lnussel @cyntss Any vote? (see previous comments)
I don't mind either way, so I'm abstaining.
Vote given 👍
So we have a winner. I will create a Google group today or tomorrow. Thanks all.
Done. Here is the group url https://groups.google.com/forum/#!forum/jangouts and here are the instructions to join https://support.google.com/groups/answer/1067205?hl=en
I just realized the good & old mailing-list style way of joining is not described in that document.
Just send a mail to<EMAIL_ADDRESS>It simply works.
RAM is pretty critical in a computer. There’s the age-old argument that you need a certain amount of RAM and no more. There’s also the debate about memory speeds, whether they actually matter or not. We’ll save those for another day. Long story short, if you’re skimping out on RAM, it will have some sort of impact on the performance. That impact obviously depends on how much capacity you get and what speed you get.
When shopping around for a memory kit, you’ll notice that most of them come in a pair of two. Installing these into the motherboard incorrectly might have a slight impact on the overall performance. A lot of motherboards also proudly portray dual channel or even quad channel support. What does this all mean?
We’ll be explaining all of that quickly in this brief guide. We’ll also be comparing single channel memory to dual channel memory and see if it makes a difference in real-world usage.
What are Memory Channels?
What is a memory channel? Let us quickly explain. RAM connects with a circuit on the motherboard known as the memory controller. A memory bus is what connects these two things together. You can think of the memory bus as a series of wires. The memory controller analyzes the type of data transferred between the processor and RAM. The memory bus handles the amount of data transferred. Depending on the number of bits transferred in the memory bus, we can find out if it’s a single channel or dual channel.
Suppose you are installing a single stick of RAM in the motherboard. Then it will be running on a single channel. This single channel configuration is usually of 64-bits. Think of it as a single “lane” for the RAM to transfer data. You’ll typically be running this configuration if you have a single stick of RAM. Bandwidth on single channel memory is lower than dual channel. But, that will also depend on the speed or frequency of the memory.
The name itself is pretty self-explanatory. In a dual channel configuration, the memory bus divides the data into two paths, which means that both modules can transfer data on separate lanes but on the same path. This results in overall faster performance. Since a single channel has a lone path of 64 bits, dual channel doubles that: there are now two lanes of 64 bits each traveling on the same path. Hence, obviously, the bandwidth will be higher.
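The arithmetic behind this can be sketched quickly. Each channel is 64 bits (8 bytes) wide, so theoretical peak bandwidth is transfers per second times 8 bytes times the channel count. DDR4-3200 (3200 MT/s) is used below purely as an illustrative assumption:

```python
def peak_bandwidth_gbs(transfer_rate_mts, channels):
    """Theoretical peak memory bandwidth in GB/s.

    Each channel is 64 bits = 8 bytes wide, so peak bandwidth is
    transfers/s * 8 bytes * channel count.
    """
    bytes_per_transfer = 64 // 8  # one 64-bit channel
    return transfer_rate_mts * 1e6 * bytes_per_transfer * channels / 1e9

# DDR4-3200 as an example: 3200 million transfers per second
print(peak_bandwidth_gbs(3200, 1))  # single channel -> 25.6 GB/s
print(peak_bandwidth_gbs(3200, 2))  # dual channel   -> 51.2 GB/s
```

These are theoretical peaks; real-world throughput is lower, but the doubling from the second channel is exactly where the dual channel advantage comes from.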
Do These Channels Have An Actual Impact?
If you get dual channel memory for your system, it's going to be faster because it has more bandwidth than a typical single-channel configuration. A dual channel kit will be noticeably faster in a lot of CPU-dependent games, where RAM also plays a major part. Now, this won't be noticeable in day-to-day tasks such as browsing or streaming. But if you're even a bit of a power user, you'll appreciate the extra headroom.
In benchmarks, it’s quite clear that dual channel memory is definitely viable for productivity. Video rendering is actually much quicker and some games even have better performance. So if you’re somewhat concerned about productivity dual channel is definitely compelling. In the end, it comes to down to price, if you can find a decent dual channel kit at a good price, definitely pick it up.
If you’re having a hard time deciding on RAM, we wrote a review on some of the best rams for gaming.
One last word of advice: if you end up getting a dual channel kit, make sure you install it in an actual dual channel configuration. You won't be getting the full potential out of your shiny new RAM if you use a single channel. Motherboards clearly label the different channels, and the BIOS also warns you if you install a dual channel kit in a single-channel configuration. So using it in single channel makes no sense at all.
Zoom camera calibration in ROS?
How would you tackle the task of zoom camera calibration using standard ROS tools? Both the calibration process itself, and camera_info publication afterwards? I haven't found anything in camera_calibration or camera_info_manager.
If I understand it correctly, you have to estimate the dependency of the K matrix on the current zoom level using a function. The dependency of K on focal length is IMO linear (as long as we ignore skew and the other odd terms).
There are several quantities that can be reported by the camera:
"zoom ratio": then I'd just multiply the f_x and f_y terms of the K matrix with this ratio (assuming the camera was calibrated with ratio 1)
focal length: I'd simply put the focal length in the matrix
field of view: I can estimate sensor width from the calibration, and then use it to get the focal length (from fov = 2*atan(0.5*width/f), i.e. f = 0.5*width/tan(fov/2)).
This of course ignores radial (and other) distortion, which may also be influenced by the focal length (at least I think so).
Then I'd publish the updated camera matrix to the appropriate camera_info topic.
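A minimal sketch of the "zoom ratio" option above, assuming K was calibrated at ratio 1 (the matrix values here are made up for illustration):

```python
import copy

def scale_intrinsics(K, zoom_ratio):
    """Scale the focal terms of a 3x3 camera matrix K (row-major
    nested lists) by the reported zoom ratio, assuming K was
    calibrated at zoom ratio 1.

    Only f_x and f_y change; the principal point (c_x, c_y) and the
    homogeneous row are kept as calibrated.
    """
    Kz = copy.deepcopy(K)
    Kz[0][0] *= zoom_ratio  # f_x
    Kz[1][1] *= zoom_ratio  # f_y
    return Kz

# Hypothetical calibration at zoom ratio 1:
K = [[500.0,   0.0, 320.0],
     [  0.0, 500.0, 240.0],
     [  0.0,   0.0,   1.0]]
print(scale_intrinsics(K, 2.0)[0][0])  # f_x doubles to 1000.0
```

The scaled matrix would then go into the CameraInfo message before publishing; distortion coefficients are deliberately left untouched here, per the caveat above.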
What do you think about this approach? Has there already been something doing a similar task?
Originally posted by peci1 on ROS Answers with karma: 1366 on 2016-04-26
Post score: 3
Comment by ahendrix on 2016-04-26:
That seems like a reasonable approach, and the right way to integrate with ROS.
Comment by ahendrix on 2016-04-26:
Most of the image pipeline nodes will probably be fine, but you will probably hit edge cases where some nodes assume the camera_info is fixed, and only use the first message instead of subscribing.
Comment by peci1 on 2016-04-26:
Yes, and I'd have to avoid all loadCalibration and saveCalibration calls in the camera info manager (otherwise the "zoomed" matrices would get saved). This is in no way optimal, but I'm afraid we can't do better...
How about calibrating at several zoom positions and then validating your single calibration model against the experimental results?
Or, given that you already have multiple calibrations, maybe all other in-between calibrations could be a linear interpolation and you don't need a model at all.
One thing that will benefit you is that distortion ought to go down with greater zoom, so even if a model or interpolation scheme is not quite right for distortion as long as the distortion coefficients are getting smaller their effects are diminished.
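The interpolation idea above can be sketched as follows, assuming the intrinsic and distortion parameters vary smoothly enough between measured zoom positions (the calibration values below are made up):

```python
from bisect import bisect_left

def interp_calibration(zoom, calibrations):
    """Linearly interpolate calibration parameters between measured
    zoom positions.

    calibrations: list of (zoom_position, [params...]) pairs sorted by
    zoom_position, e.g. params = [fx, fy, k1, ...]. Values outside the
    measured range are clamped to the nearest calibration.
    """
    zooms = [z for z, _ in calibrations]
    if zoom <= zooms[0]:
        return list(calibrations[0][1])
    if zoom >= zooms[-1]:
        return list(calibrations[-1][1])
    i = bisect_left(zooms, zoom)
    z0, p0 = calibrations[i - 1]
    z1, p1 = calibrations[i]
    t = (zoom - z0) / (z1 - z0)
    return [a + t * (b - a) for a, b in zip(p0, p1)]

# Hypothetical calibrations at 1x and 3x zoom: [fx, fy, k1]
cals = [(1.0, [500.0, 500.0, -0.20]),
        (3.0, [1500.0, 1500.0, -0.05])]
print(interp_calibration(2.0, cals))  # halfway between the two
```

Whether per-parameter linear interpolation is adequate, especially for the distortion coefficients, is exactly what the calibrate-at-several-positions-and-validate step above would check.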
You'll want to make a special node that subscribes to the camera image and then publishes the camera_info with the same timestamp (and rest of the header), some nodes are going to be subscribing to the Image and CameraInfo with a synchronizing message filter.
You may have to remap the fixed camera_info the camera driver is trying to publish to another topic so the namespaces are correct; lots of nodes only subscribe to /foo/camera_info if /foo/image_raw is the image topic.
Do you have a camera with an encodered zoom, or the actuated zoom is sufficiently repeatable this ought to work open loop only knowing the commanded zoom position?
I'd really like to hear what you end up discovering, please update this page with results.
Originally posted by lucasw with karma: 8729 on 2016-04-26
Post score: 3
Comment by peci1 on 2016-04-27:
See https://github.com/ros-perception/camera_info_manager_py/pull/11 . I've implemented both ways. If you have any ideas how to do it better, please tell me!
Comment by lucasw on 2016-04-27:
Cool! I'll have to try that out. Setting up a software test with a rviz/Camera would be interesting. A real world test involving checkerboards (probably at least two- a big one for the wide angle and a small one for the zoomed in telephoto) would be good to do too.
Most of the things we do online run over HTTP, aka "the web". We tend to think of this as a properly standardised protocol. In reality, this is only true in a very limited sense; HTTP is at its heart a protocol for pushing and pulling documents from remote locations, larded with little more than a MIME type, filename and language.
This model is poor in many ways, and leads to problems that are a testament to the use of an unsuitable protocol; but the problems have been patched over with things like WebSockets and server-sent events to keep it afloat. At its heart, HTTP is not suitable for much of what it is made out to be today. Or has anyone spotted offline webmail reading, cross-site integration of travel booking, or editing of local files?
Shrink-wrapping data with code can also be seen as a way to circumvent proper specification of the data. And that is expensive for users — it means that they cannot use their own software to operate on the data; not unless they are willing to figure it all out by themselves, and risk future breakage when the data format changes. The end result is that web-based access to whatever data source leads to one-sided automation, something that properly specified, purpose-specific protocols prevent. Since HTTP is not specific to the purpose at hand, it should not be used for everything. HTTP is as good a tool as any hammer, but not everything is a nail; there are no one-size-fits-all protocols.
There are many counter-examples, all of which have been tried over HTTP and all of which have led to problems; these problems have been addressed in a somewhat generic manner, and the results remain less useful than anything purpose-specific:
- chat is best done using XMPP or the older IRC protocols (problem: HTTP pulls documents, but messages may need to be pushed downstream)
- telephony, with or without video, works best over SIP (problem: realtime traffic is best sent outside of connections like the one HTTP maintains)
- data is best passed over LDAP (problem: data definitions and syntaxes are local and undefined when using JSON)
There is a growing tendency to specify APIs on top of HTTP. This does indeed address the lack of specification that stems from HTTP's generic nature; at least when properly worked out. But it comes loaded with the properties of HTTP, which are not always advantageous:
- the need to escape characters
- the requirement to parse with a security mindset
- lacking reasonable support for binary strings
- the unnecessarily strict request/response lockstep
- a very low threshold, possibly leading to lower quality specifications
- open to "not invented here" variety: incompatibility without a need
- very often, localisation of a service is not standardised
The problem of localisation of a service has more impact than one may expect on first sight. It is not easy, for example, to use identities under one's own control (like when using one's own domain name) and announce the interface that we currently rely on for HTTP-based communication, which may change when, say, a user agreement changes to our disadvantage. The generic nature of HTTP and its resulting versatility makes it unlikely that such facilities will indeed be made generally available; but localisation facilities have been defined for most purpose-specific protocols. This is usually done through DNS records, which are relatively easy to define given full awareness of the purpose at hand.
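As a concrete illustration of DNS-based service localisation, a purpose-specific protocol such as XMPP lets a user on their own domain announce where their chat service runs with a single SRV record (the names below are placeholders):

```
; Clients of user@example.com discover the XMPP server via DNS.
; Fields after SRV: priority, weight, port, target host.
_xmpp-client._tcp.example.com. 86400 IN SRV 5 0 5222 chat.example.com.
```

If the hosting provider ever becomes disadvantageous, the user changes this one record and keeps their identity and communication bonds intact.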
The HTTP approach to the localisation problem is simply to have one "central" service and use it for the whole World. This leads to centralisation of the Internet and its protocols, which is generally bad for users and their privacy; their communications now belong to the website owner, and it is usually not possible to escape without breaking communication bonds. Though HTTP may be pleasant as a wrapper for special purposes, it is easily beaten by the higher level of automation and retained control of purpose-specific protocol implementations. This control over one's own communication usually does not cease when using a large-scale deployer, simply because that is not how purpose-specific protocols are designed; they are supportive of large-scale hosting, but not to selling one's soul.
In conclusion, HTTP is a powerful tool for exchanging documents as-is, but it is generic in nature. This is perfect at that level of abstraction, but it turns into poverty when we try to stretch the model beyond its capacities. Even though it has been put on steroids with many new extensions, these too are kept general where specificity would be desired, while at the same time HTTP can be too specific to capture a protocol's requirements well. The tendency to abandon purpose-specific protocols for hacks on top of HTTP is not a sign of proper engineering.
|
OPCFW_CODE
|
[Date Prev][Date Next]
LDAP Bind request seen many times in network trace
- To: OpenLDAP-software@OpenLDAP.org
- Subject: LDAP Bind request seen many times in network trace
- From: Srinivas Cheruku <email@example.com>
- Date: Thu, 16 Feb 2006 20:15:25 +0530
- User-agent: Mozilla Thunderbird 1.0.7 (Windows/20050923)
We have an environment where there are multiple Active Directories with replication in place.
I am using openldap library to connect to the AD to perform ldap operations like search/add/modify/delete using SASL/GSSAPI authentication.
I am able to connect to the LDAP servers and able to do the LDAP operations like search/add/modify/delete successfully.
But when I check the network trace, I am seeing many LDAP Bind Requests. Is this normal?
Also, I am seeing bind requests to other LDAP servers as well. I don't understand why OpenLDAP is binding to another server which I never initialised.
I have 3 ADs
The code I have written goes this way:
1. I initialize using ldap_initialize(ld, ldap://server1.test.com:389).
2. Then I call ldap_sasl_interactive_bind_s() to bind to the LDAP server.
3. Then ldap_sasl_rebind().
4. Then a lot of LDAP operations like search/add/modify/delete.
When I run the code and check the network trace:
1. I see LDAP Binds to server1.test.com many times - this is the server on which ldap_initialize was called.
2. I also see LDAP Binds to server2.test.com - I don't know where it gets this LDAP server name from.
Can you please let me know:
1. whether the behaviour observed in the network trace is normal?
2. how I can make LDAP bind to only one server even though I have many LDAP servers in my environment?
Thanks in advance,
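One common cause of binds appearing against servers that were never passed to ldap_initialize is referral chasing: Active Directory returns referrals to its replication partners, and by default libldap follows them, opening (and binding on) new connections. Whether that explains this particular trace is an assumption, but it can be switched off in ldap.conf:

```
# ldap.conf - stop the client library from chasing referrals itself
REFERRALS off
```

Equivalently, per-connection in code, `ldap_set_option(ld, LDAP_OPT_REFERRALS, LDAP_OPT_OFF)` disables referral chasing for that handle.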
|
OPCFW_CODE
|
We are moving some code onto a dual Xeon processor machine on QNX 6.21, in a really short timeframe, not enough to rearchitect the threading or smp settings. To best take advantage of multiple processors, what QNX settings should be changed from the default install? This is scientific code that will run on this system for about a week for high performance calculations, then go back to the single processor system.
The only thing to do OS-wise is to switch to the SMP kernel. Unless your system uses multiple processes or threads that want to/can run concurrently, you should see only a very small difference.
However, if possible you could start two instances of the application and have them work on different sets of data.
Personal opinion: for scientific stuff, look at other OSes. Some offer better compilers; the Intel compiler is freely available for non-commercial purposes on Windows and, I believe, Linux. Some will also make use of NUMA architecture, which can yield improved performance for high-memory-bandwidth applications.
If I split the software into two or more processes - how will QNX schedule these processes amongst the processors? (this is possible although we wouldn’t have time to do it on the thread level) Thanks for the tip…
I tried to enable SMP support by copying the file as below, but this causes the system to lock up at the "hit esc for .altboot" screen. The processors are 3.4GHz Xeon Noconas. Is there any other way to enable SMP that will not cause the system to lock?
cp /boot/fs/qnxbasesmp.ifs /.boot
With SMP, the CPU becomes a resource the OS will try to make best use of. Hence it will assign whatever is runnable to an available CPU, whether a thread or a process. A process is in fact a program with at least one thread.
As for not booting: the motherboard must support SMP (version 1.2, I think). Aside from that I don't really have an idea. Try pressing space to go into verbose mode to try to figure out what is wrong. I believe there were some bugs in 6.2.1 concerning SMP and certain hardware configurations.
The boot gets as far as showing a bunch of periods ... to the end of the screen before it freezes, so it doesn't get far enough for the space-bar/F6 verbose option.
I found a post from jjpedroza with a similar SMP boot freeze, although his system got a bit further, to the login screen -
I am using the vesa video driver, with 6.2.1B.
I tried disabling hyperthreading in the bios, and experimented with turning off other cpu options. I thought maybe I could find something in the bios to trick QNX into thinking this is an older machine.
The motherboard and CPUs are the latest available so they should have good support for SMP.
I dug this up from the release notes for 6.2.1B:
If you try to use the VESA driver on an SMP machine, your machine might spontaneously reboot.
This might not occur on all SMP boxes.(Ref# 15928, 16009)
As a workaround:
Make sure you’re using the proper video driver for your card.
If the problem persists, run the non-SMP version of procnto.
This was fixed in 6.3.0. I hope this helps.
The onboard chip is Rage, so I changed the video to Rage and it still hangs on smp boot. I wonder what the OS is doing at the time right after it says hit esc for .altboot…
Probably 6.2.1 does not like this motherboard/chip combination.
In verbose mode, right before freeze the following is displayed:
system page at
starting next program at vf000ef08
header size 0x009c
total size 0x0978
#cpu = 4 (disabling hyperthreading in bios shows 2)
type = 0
I doubt the video driver makes any difference, as I don't believe the boot process even makes it that far.
I seem to remember Rage/SMP/QNX 6.2x had frequent lockups. Can you try the non-SMP image, or try 6.3? Other people have had SMP problems that got resolved by QNX 6.3.
I installed the 6.3 eval, and now SMP is working. I see the extra processors in the performance monitor. Thanks for the tips. I think that the hardware is newer than what 6.2.1 will handle.
|
OPCFW_CODE
|
Is Stack Overflow a social networking site?
I know it's a Q&A site, but does it fall under the umbrella of social networking? There's a raging debate here at work, so I thought I'd put this question to the community.
I'd like to go on the record as thinking that this isn't a social networking site!!
If they're considering blocking social networking sites (pure speculation on my part, which tells you the kind of place that I currently work at) then I'd argue very strongly that SO is not a social network.
good point @Bill, fortunately they're not considering that. StackOverflow & the rest of the StackExchange sites are the few that aren't restricted by WebMarshal (or as it's otherwise known, WebNazi)
Social networking on a site frequented almost entirely by geeks? Seems counter-productive to me...
if you want counter-productive, you should go to reddit.com
@DaveDev: To your question: Look here. That's why
@genesis-φ: thanks - makes sense. I'll modify the question to remove the edit (an enquiry into why this was getting attention today, almost a year after it was posted)
BTW where did you spot that your question is "getting" more attention?
My notification bar at the top was lighting up any time someone answered a question or added a comment.
I don't have access to read this full IEEE Software article, but its public Abstract says [former] Stack Overflow CTO David Fullerton was interviewed on the topic of Social Networking Meets Software Development - maybe someone with access may be interested to read it in full to see if David Fullerton weighed in with any explicit opinion.
According to Wikipedia:
A social network is a social structure made up of individuals (or organizations) called "nodes," which are tied (connected) by one or more specific types of interdependency, such as friendship, kinship, common interest, financial exchange, dislike, sexual relationships, or relationships of beliefs, knowledge or prestige.
So I'd say no. It would seem like connections between the nodes is the definitive feature of a social network and we don't have that.
Agreed. The only "connections" SO really has are tags, and those link questions, not people.
"a social structure made up of individuals [...] which are tied (connected) by [...] common interest..." Does that not describe SO?
@Purmou In that way, forums are also social media, as are many sites that have users and discussions. Actually, now that I think about it, are forums social media?
@Charlotte according to Wikipedia, yes: https://en.wikipedia.org/wiki/Social_media#Definition_and_classification
No, it is not. It's a Q&A site.
We can't connect to friends.
We don't have a means for private messages.
We don't share pictures of our dog, kids, or house.
... although sometimes we share pictures of ourselves in pyjamas ;)
Point, but luckily there isn't a badge for that ;-).
for newcomers: the easteregg Jon referred to :)
I like this answer.
I should have just retweeted this answer. It says basically the same as mine.
I like your comment, @Bill the Lizard.
I like the fact that Daniel likes our comments and answers. ;-).
gamecat likes this
I would just like to disavow any responsibility for, interest in, authority over or knowledge of Jon's post. Well, the "knowledge of" part ended 30 seconds ago, but the rest is still true.
@Pop: it was bound to happen :p
For how long have Web 2.0 and social networking meant posting private pictures?
the last 2 are doable on chat rooms though
@caub, shush. You'll get chat filtered, and we can't do without chat.
Technically we do have DMs?
I'd call it an anti-social networking site.
Only if you equate social to fun. Or if you're on Math Overflow, I guess.
I would post a comment to this, if I didn't hate talking to people so much.
@gno: But you just posted a comment... or is it OK as long as the comment is hostile in nature?
Antisocial would be: maintaining links with people we don't like, posting compromising pictures of them and sending hate mail. You can call it Disgracebook, or flunkedOut.
It's not a social website, period, though it doesn't actively fight against social networking either. It's purely a Q&A site.
Stack Overflow forms a social structure, wherein we have patterns that define our relationship to the site and to each other.
While social networks almost always exist within every social structure, social structures need not formalize or recognize the networks that form within them. Stack Overflow is a good example of a social structure that does not recognize the social networks that are formed within its cultivated social structure.
Social networks come in a variety of forms, and for very broad definitions Stack Overflow comes close. However, one of the defining characteristics common to nearly every social network is that people define specific interpersonal relationships within the social structure.
People are forming such relationships within Stack Overflow, but the site and software do not formalize these relationships.
I wouldn't say that Stack Overflow is a social network until the site and software themselves formally recognize those relationships.
In fact, far from supporting social networks, Stack Overflow has a few features that discourage social networks from forming within its social structure. If you want to talk to a given individual, your only on-site option is a public comment on one of their posts. While you and someone else may share in common your knowledge of Ruby, unless you encounter them off site you may never know that you also share the enjoyment of, say, painting. Chat fills the gap a little bit, but only for those that choose to participate.
I'm satisfied with this answer. Adam, can I ask whether you have a good knowledge of sociology?
@Ooker No, while I'm interested in social systems that humans form, how they form, and how they can be guided, I have no particular expertise in them.
If you are interested in them, that's enough for me. I want to ask: if I add this feature in my own browser, will it make SE become a social network site for only me?
@Ooker In some small way, yes. It won't change how others use the site at all.
It is not a social network due to the main ideological rule on which Stack stands - it is the value of question and value of answers you should care about. Nothing else really matters. Personalities just doesn't matter.
Stack Overflow does not provide a non-public means of user-exchange. Thus I guess you can apply a lot of pre-existing analysis of twitter user exchange to SO.
The public-only means of communication definitely helps the aspect of transparency a lot.
The above answers give an academic, textbook-like (and dare I say, inconsequential) analysis to the question. I'd like to offer a practical perspective that business strategy planners (such as Facebook's CEO) should care a lot more about.
Anecdotally, I get my fill of bonding with other human beings via Stack Overflow, to the point that I am not as compelled to use Facebook to partake in this attention-economy trade. Seeing one of those red balls in the Stack Overflow menu bar gives me the same acknowledgement of my existence by other beings, which humans evolutionarily (?) crave, as the red alert sprites in Facebook's menu bar do.
This is significant from a competitor analysis point of view. Stack Overflow is a substitute product for Facebook within a certain demographic. So I would say yes it is a form of social networking.
This answer is off-topic.
There may be some "social networking" happening on Stack Overflow, but that does not make it a social networking site any more than Amazon is one. What makes a social networking site is the functionality and focus it has.
I think it is on-topic, but unclear.
|
STACK_EXCHANGE
|
Decision Tree construction algorithms have usually assumed that the class labels were categorical or Boolean variables, meaning that the algorithms operate under the assumption that the class labels are flat. In real-world applications, however, there are more complex classification scenarios, where the class labels to be predicted are hierarchically related. For example, in a digital library application, a document can be assigned to topics organized into a topic hierarchy; in web site management, a web page can be placed into categories organized as a hierarchical catalog. In both of these cases, the class labels are naturally organized as a hierarchical structure of class labels which defines an abstraction over class labels.
The HLC (Hierarchical class Label Classifier) algorithm is designed to construct a DT from data with hierarchical class labels. It follows the standard framework of classical DT induction methods, such as ID3 and C4.5. The HLC process is:
1. Attribute Selection Measure
An attribute selection measure is a heuristic to measure which attribute is the most discriminatory to split a current node. In view of the weakness of the traditional entropy-based method, a new measure is proposed, called hierarchical-entropy value, by modifying the traditional entropy measure. It can help measure the appropriateness of a node with respect to the given class hierarchical tree.
2. Hierarchical-entropy value
The hierarchical-entropy value of a node vb can be denoted as Hentropy (vb), and can be computed with the following equation:
This result indicates that by using the new hierarchical-entropy measure, the appropriateness of a node in a CHT can be properly measured.
3. Hierarchical Information Gain
Next, decide whether an attribute is a good splitting attribute according to how much hierarchical information is gained by splitting the node through the attribute. First define what the hierarchical information gain is, and then use it to develop a method for choosing the best splitting attribute. The hierarchical information gain is used to select a test attribute ar to split the current node vb, and can be denoted as H-info-Gain (ar,vb). Let vx denote a child node obtained by splitting vb through test attribute ar. The hierarchical information gain can then be computed as follows:
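The equation itself appears to have been lost in extraction. A hedged reconstruction, assuming the paper mirrors the classical information-gain definition with entropy replaced by hierarchical entropy, would be:

```latex
\mathrm{H\text{-}info\text{-}Gain}(a_r, v_b) \;=\;
  \mathrm{Hentropy}(v_b)
  \;-\; \sum_{v_x} \frac{|v_x|}{|v_b|}\,\mathrm{Hentropy}(v_x)
```

where the sum runs over the child nodes $v_x$ obtained by splitting $v_b$ on $a_r$, and $|v_x|$ is the number of records in $v_x$. The paper's actual definition may differ in its weighting.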
The goal is to select the attribute with the highest H-infoGain value, which is chosen as the next test attribute for the current node.
4. Stop criteria
For the labels at the bottom level, let majority be the label with the most data in vb, and let percent (vb, majority) be the percentage of data in vb whose labels are majority. If the following conditions are met, stop growing node vb; otherwise, the node must be expanded further.
Once stopped, assign a concept label to a stop node vb to cover most of the data without losing too much precision. This is accomplished by using the function getLabel, detailed in the next section.
5. Label assignment
A heuristic for assigning a class label when a node matches the stop criteria. The goal is to use the concept label in the class hierarchical tree with the highest accuracy and precision to label the node. Each concept label of a leaf node vb has three vital indices: accuracy, precision, and score.
Applying the function getLabel to the information you can obtain the accuracy, precision, and score values for each concept label of nodes v1 and v2.
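A getLabel-style heuristic can be sketched in Python. This is a hypothetical illustration, not the paper's algorithm: the exact accuracy, precision, and score formulas below are assumptions (accuracy as the fraction of the node's data covered by the concept, precision as a penalty for how many bottom-level labels the concept spans, score as their product).

```python
# Hypothetical sketch of a getLabel-style heuristic. The hierarchy is
# a dict mapping each internal concept to its child concepts; leaves
# are the bottom-level class labels.

def descendants(hierarchy, concept):
    """All bottom-level labels covered by `concept` in the class tree."""
    children = hierarchy.get(concept, [])
    if not children:
        return {concept}
    out = set()
    for c in children:
        out |= descendants(hierarchy, c)
    return out

def get_label(hierarchy, all_leaves, node_labels):
    """Pick the concept whose accuracy * precision score is highest
    for the data (bottom-level labels) falling into this node."""
    best = None
    for concept in set(hierarchy) | all_leaves:
        covered = descendants(hierarchy, concept)
        accuracy = sum(1 for l in node_labels if l in covered) / len(node_labels)
        # More specific concepts (fewer covered leaves) score higher.
        precision = 1 - (len(covered) - 1) / max(len(all_leaves) - 1, 1)
        score = accuracy * precision
        if best is None or score > best[1]:
            best = (concept, score)
    return best[0]
```

With a toy hierarchy {"root": ["A", "B"], "A": ["a1", "a2"], "B": ["b1"]} and node data ["a1", "a1", "a2"], the heuristic trades the perfect accuracy of the broad concept "A" against the higher specificity of "a1", mirroring the accuracy/precision tradeoff described above.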
An empirical study was done to demonstrate that the proposed method can achieve a good result in both predictive accuracy and predictive precision. Two real-world data sets were used in the experiments. Two criteria were used to compare the performances of different algorithms: (1) accuracy, which is the percentage of future data that can be correctly classified into the right concept label, and (2) precision, which is a measure of the specificity of the label that data can be assigned to.
To highlight the tradeoffs between predictive accuracy and predictive precision of the resulting hierarchical concept labels, and to explore the performance of our proposed method, we chose a well known decision tree system, C4.5, to compare with our method, HLC.
The HLC algorithm can achieve higher accuracy and better precision simultaneously. By carefully examining the C4.5 performance data level-by-level, it was found that when accuracy is high, precision becomes low, and vice-versa. This is because the traditional decision tree classifier does not consider data with hierarchical class labels, and thus it cannot handle the tradeoffs between accuracy and precision. On the other hand, since the aim of the project was to construct a DT from data with a class hierarchical tree, the ability to optimize the tradeoff between these two criteria is embedded into the design of the algorithm. The results in Fig. 14 indicate that the design achieved this goal.
The HLC algorithm can be used when evaluating hierarchical data as it will likely allow for more accurate and precise analysis.
Chen, Yen-Liang; Hu, Hsiao-Wei; Tang, Kwei (2009). Constructing a decision tree from data with hierarchical class labels. Expert Systems with Applications. Volume 36, Issue 3, Part 1, Pages 4838–4847.
|
OPCFW_CODE
|
Welcome to Song Encoder, a special series of The Changelog podcast featuring people who create at the intersection of software and music. This episode features Pwnie Award-winning songwriter Forrest Brazeal.
Welcome to Song Encoder, a special series of The Changelog podcast featuring people who create at the intersection of software and music. This episode features $STDOUT and contains explicit language.
Webamp.org’s visualizer, Butterchurn, now uses WebAssembly (Wasm) to achieve better performance and improved security. Whereas most projects use Wasm by compiling pre-existing native code to Wasm, Butterchurn uses an in-browser compiler to compile untrusted user-supplied code to fast and secure Wasm at runtime.
AI is being used to transform the most personal instrument we have, our voice, into something that can be “played.” This is fascinating in and of itself, but Yotam Mann from Never Before Heard Sounds is doing so much more! In this episode, he describes how he is using neural nets to process audio in real time for musicians and how AI is poised to change the music industry forever.
Tenacity is an easy-to-use, cross-platform multi-track audio editor/recorder for Windows, MacOS, GNU/Linux and other operating systems and is developed by a group of volunteers as open source software.
Sound familiar? Maybe because it’s a fork of the historically awesome Audacity project that promises:
no telemetry, crash reports and other shenanigans like that!
This is a curated list of my favourite music DSP and audio programming resources. It was originally meant to be an official “Awesome list”, but apparently you are not meant to write in the first person, so it is now a “more awesome” list.
The music you hear is generated in your browser by a randomised algorithm, below you can see the notes and parameters that are currently in use. You can also interact with various parameters and buttons manually. The green autopilot switches change how automatic playback is. Leave them on for a lean-back experience. Buttons labelled ⟳ will generate new patterns. Source Code is on GitHub.
Daniel Jeffries’ wildly popular Learning AI If You Suck At Math series is back after a 3-year hiatus. In part 8, Daniel asks (and answers) the question: Can AI make beautiful music?
Music Time brings the power of the Spotify player to your code editor. Control your music, view and create playlists, favorite and repeat songs, and discover new music without context switching to the Spotify web or desktop app.
Music Time is free and works with VS Code, Atom, and JetBrains IDEs. Some of its features require Spotify premium, but the personalized song recommendations work with the free version of Spotify as well. It even has a cool vizualizer so you can see your most productive songs.
Musicians and developers go together like peas and carrots, Jenny. So it makes sense that techniques used by musicians to hone their skills might transfer over to software people. One of those techniques is the “masterclass”
A masterclass is a format in which musicians perform a work for an established artist and the artist then gives them feedback rather like a lesson, except that all of this happens in front of an audience.
Click through for a compelling distillation of what software teams can learn from musicians when it comes to giving and receiving feedback.
What does this have to do with coding, you ask? Ambient music, IMHO, is the best music to code to. I’ve been enjoying this list ever since it hit my radar the other day, so I thought I’d pass it along.
The roots of ‘view source’ live on, in an incredibly realized form. (In Beaker, you can right-click on Duxtape and ‘view source’ for the entire app. You can do this for your mixtapes, too. Question: When was the last time you inspected the code hosting your Webmail, your blog, your photo storage? Related question: When was the first time?)
It’s hard to see a world where apps like this get mainstream adoption. On the other hand, what other choices do we have? 🤔
This site is a collection of generative music pieces which can be listened to. The term “generative music” has been used especially by Brian Eno to describe music which changes continuously and is created by a system. Such systems often generate music for as long as one is willing to listen.
Push play and code away.
JS Party panelist, Feross Aboukhadijeh:
In the days of Geocities and Angelfire, a quirky HTML tag called ⟨bgsound⟩ enabled sound files to play in the background of webpages. Usually, these files were in the MIDI format. What a glorious era that was! Sadly, ⟨bgsound⟩ has been removed from browsers and MIDI is obscure and hard to play back. In this talk, we’ll bring MIDI and ⟨bgsound⟩ back from the dead using WebAssembly, Emscripten, Web Audio, and Web Components. When we’re finished, you’ll be able to give your webpages the 90’s treatment in a modern, standards-compliant way!
How do MIDIs even work? Why won’t they play on the web anymore? Can WASM save the day (hint: yes)? How does Feross get so many eyeballs on his creations? Is Preact awesome for building sites like this? What’s the future of BitMidi look like? Don’t ask us, listen to the episode!
JS Party podcast host Feross built a new web app – BitMidi – for listening to free MIDI songs. It’s a historical archive of MIDI files from the early web era. This post breaks down why and how he built the site.
Oh, and since you’re probably wondering, the answer is “Yes, there are hundreds of Zelda songs on BitMidi!”
Jordan Eldredge has been working hard to make Webamp even more rad:
Take a trip down memory lane with this faithful WebGL port of MilkDrop, the iconic music Winamp visualizer.
Check it out in Chrome and Firefox. What should you listen to while the visualizer does its thing? Our episode all about Webamp, of course. 🤓
Like JSFiddle, but for ChordPro chord sheets. I’m no musician, so I’m not embarrassed to say I had to google to learn ChordPro is an ASCII text file format for transcribing songs with chords and lyrics.
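For the curious, a minimal ChordPro fragment looks something like this (an illustrative sketch, not taken from the tool itself): `{...}` directives carry metadata, and chords sit in square brackets inline with the lyrics.

```
{title: Example Song}
{artist: Anonymous}

[C]Twinkle, twinkle, [F]little [C]star
[F]How I [C]wonder [G]what you [C]are
```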
|
OPCFW_CODE
|