Eliminate as many distractions as possible. Put your mobile phone away, step away from your PC, and make your environment as peaceful as you can. Giving homework your undivided attention will actually make it easier, because your mind won't be juggling different tasks at the same time.
If you do this once, no engineer will ever give you bogus estimates again (you've called bull*). However, depending on Sam's personality, this could cost you relationship points with him, so do it as tactfully as possible, and only when necessary. Good lead programmers should be calling estimate bluffs on their own, but if they don't, it's up to you.
wikiHow Contributor: Complete your homework on time and get good grades. Speak up and answer questions in class. Ask about extra credit. Behave yourself in class and don't get into any trouble.
My parents never care about my problems, the teachers don't pay attention to my problems, and I am a loner. Will asking an older brother do?
Hmmm, it seems you've already signed up for this course. While you're here, you might as well check out some of the amazing companies that are hiring like crazy right now.
Identify Available Resources: What people, equipment, and money will you have available to achieve the project goals? As a project manager, you usually won't have direct control of these resources, but you will have to manage them through matrix management.
What I absolutely wouldn't do is finance a major upgrade to a house if it puts it outside the range of comps in the area.
Typically, these ordered lists have one important line dividing them into two parts. The top part is priority 1: things we must do and cannot possibly succeed without.
No matter how frustrated you might be, it's important to keep your tone professional and laser-focused on finding a solution to the issue as quickly as possible.
If this doesn't happen and all you find are horror stories, at least you'll know that you're better off without that gig.
My cousin's officemate wanted to impress their manager by tackling a brand-new project, so he asked my cousin to shoulder a daily duty that he wouldn't have time for because of the bigger-and-better project. Not cool.
Check your phone or your social media sites during your study break, but not before. Use these distractions as a carrot, not as a pacifier.
I had to do my C++ project and prepare for my tests at the same time. It was simply impossible to handle! I was 100% sure that nobody would do my C++ homework for me over the internet, but thank God I was mistaken! From now on you are the only source I can trust to do my C++ assignments. I'll be back next semester!"
However, a cash-out makes sense in some scenarios, particularly when your current mortgage rate is far higher than what you can get today.
[Source: OPCFW_CODE]
How can a method be callable from some classes, but not others?
There is a class named ClassA with one public method named FunctionA.
There is another class named ClassB; it needs to use FunctionA and it is not a subclass of ClassA.
The third class, ClassC, should not be able to call FunctionA, and it is not a subclass of ClassA either.
In addition, the relationship between ClassB and ClassC is not inheritance.
Are there any solutions for this, or suitable design patterns?
Thanks for the help.
Here is a related question which might help you.
Hi @JornVernee, you helped me. From your link, I got the most appropriate answer from Salomon BRYS. Thanks.
You can place ClassA and ClassB in the same package (and ClassC in another) and use the package-private (default) access modifier for the method FunctionA.
This solution is the easiest and relies only on the JLS (it works at any language level and on any JVM implementation):
Example 6.6-4. Access to Package-Access Fields, Methods, and Constructors
If none of the access modifiers public, protected, or private are specified, a class member or constructor has package access: it is accessible throughout the package that contains the declaration of the class in which the class member is declared, but the class member or constructor is not accessible in any other package.
Other ways to deal with your problem (reflection, code generation, etc.) are much more complex, buggy, and slow.
PS: it is also possible to leave the method FunctionA public and decompose your application into two modules. Place ClassA and ClassB in the first module and ClassC in the second. The first module can use the second as a dependency, but the second one shouldn't have access to the first. This approach is more suitable for complex applications, and I recommend using build tools such as Maven or Gradle to handle such directed dependency graphs (it can be very tricky for large-scale apps).
I can't get the point of the last part about 'decompose application ...'; could you provide some key code? In addition, what I really want to solve is the case where ClassA and ClassB are not in the same package. @Cootri
Yeah, just google "maven multi module project" for a starting point then. There are plenty of examples on the web: http://www.codetab.org/apache-maven-tutorial/maven-multi-module-project/ , http://www.mastertheboss.com/jboss-frameworks/maven-tutorials/jboss-maven/maven-multi-module-tutorial and so on.
Maybe the Interface Segregation Principle, one of the SOLID principles, can solve your issue.
Make ClassA implement an interface AB that is used by ClassB and an interface AC that is used by ClassC. This way ClassC doesn't see the methods provided for ClassB.
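The question is about Java, but the segregation idea is language-agnostic. Here is a minimal Python sketch of the same shape, with abstract base classes standing in for the interfaces AB and AC; the second method, other_function, is a made-up placeholder, since the question only defines FunctionA. As in Java, this restricts what the declared type exposes rather than enforcing anything at runtime:

```python
from abc import ABC, abstractmethod

class AB(ABC):
    """Narrow interface exposing only what ClassB needs."""
    @abstractmethod
    def function_a(self) -> str: ...

class AC(ABC):
    """Narrow interface for ClassC; deliberately omits function_a."""
    @abstractmethod
    def other_function(self) -> str: ...  # hypothetical placeholder method

class ClassA(AB, AC):
    def function_a(self) -> str:
        return "for B only"
    def other_function(self) -> str:
        return "safe for C"

class ClassB:
    def use(self, a: AB) -> str:
        return a.function_a()      # fine: AB declares function_a

class ClassC:
    def use(self, a: AC) -> str:
        return a.other_function()  # AC does not declare function_a at all
```

A type checker (or the Java compiler, in the original setting) will reject any attempt by ClassC to call function_a through its AC-typed parameter.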
[Source: STACK_EXCHANGE]
Set constants in NodePattern don't work in ruby 2.4
I was trying to DRY up a pattern for a PR in rubocop-performance and the NodePattern docs suggest that you can use %CONST with a Set. However, given that rubocop still supports ruby 2.4 I don't believe this is a good recommendation, because Set#=== does not exist in 2.4, and the pattern therefore fails to match there.
For example: https://app.circleci.com/pipelines/github/rubocop-hq/rubocop-performance/341/workflows/528ddd46-d4f6-4dc5-883c-fc02cc002540/jobs/2155
The pattern:
CANDIDATE_METHODS = Set[:select, :find_all, :filter]
def_node_matcher :detect_candidate?, <<~PATTERN
{
(send $(block (send _ %CANDIDATE_METHODS) ...) ${:first :last} $...)
(send $(block (send _ %CANDIDATE_METHODS) ...) $:[] (int ${0 -1}))
(send $(send _ %CANDIDATE_METHODS ...) ${:first :last} $...)
(send $(send _ %CANDIDATE_METHODS ...) $:[] (int ${0 -1}))
}
PATTERN
I wonder if we can refine sets for 2.4 inside NodePattern to add #=== to it? Alternately, should Sets just not be used in rubocop patterns until 2.4 support is dropped? I was considering changing my constant to Set[...].method(:include?).to_proc but that just feels ugly to me.
I just released 0.4.0, which does add support for Set#=== in Ruby 2.4 😅
🤦
Thanks, that's great! Are you maybe able to bump the version in the different rubocop subgems so we can take advantage of this?
🤦
I wasn't clear: I just released it... because you opened this issue 😆
Are you maybe able to bump the version in the different rubocop subgems so we can take advantage of this?
No need, if you do bundle update rubocop-ast it should upgrade the version as it is compatible with the subgems
Alright I'll do that in my PR! Thanks!
FYI, I am working on rewriting the NodePattern compiler, and that rewrite should give you the ability to have better unions and also auto-optimize sets...
Yeah I was reading through your PR for that yesterday, great stuff!
Alright I'll do that in my PR! Thanks!
Oh, right, if you do a PR for a gem, then you should have a separate commit that bumps the requirement for rubocop-ast to >= 0.4.0
Oh, right, if you do a PR for a gem, then you should have a separate commit that bumps the requirement for rubocop-ast to >= 0.4.0
Should that be in the .gemspec? Gemfile.lock is gitignored in rubocop-performance (but FWIW CI is installing 0.4.0 now and passing in ruby 2.4).
Yes, gemspec. I see that rubocop-performance doesn't list explicitly rubocop-ast yet, but I'd recommend you add the line nevertheless:
s.add_runtime_dependency('rubocop-ast', '>= 0.4.0')
Thank you for your help!
My pleasure, thank you for your contributions
[Source: GITHUB_ARCHIVE]
Our team consists of experienced research scientists as well as software developers; we believe this combination is vital in scientific software development to ensure requirements are fully captured and that, ultimately, customers are provided with complete solutions. Many scientific software teams lack sufficient understanding of the scientific question they are addressing.
Data analysis and processing
TANGO and EPICS
Let us help you out with that!
Portfolio: ISPyB’s SynchWeb
Quantum Detectors develop and maintain the SynchWeb interface for ISPyB
SynchWeb is the most advanced and complete interface to the Macromolecular Crystallography (MX) database ISPyB. It is currently deployed to all MX users at Diamond Light Source. Source code is available on GitHub, and more extensive information can be found on the GitHub pages.
Portfolio: XRF Web
XRF-Web is a simple web application for inspection and fitting of ascii fluorescence spectra, demonstrating the type of functionality possible with modern web based technology.
Portfolio: Xray Utils
Opening the web app will cache it on your device; whether you run it on iOS, Android, desktop, or laptop, once loaded you will have instant access to it regardless of your internet connection. The utility is composed of four parts:
Status – shows the current storage ring monitors of several synchrotron facilities around the world. Shake your device if you're bored to display one at random! Some of the links we used before are broken, and we will continue to update these as we make improvements. Can't find the synchrotron that you use? Contact me (and if you know the web address for the status page, that's even better!) and you just might see it in the next release.
Elements – displays the general atomic properties, absorption edges, and fluorescence lines (with yields) for elements hydrogen through californium. A must for the x-ray absorption experimenter or microprobe user. You can also click on the “filter” button to show the elemental filters that could be used for scatter reduction in an EXAFS experiment.
Absorption – Don't know how much your kapton windows absorb at Cu? Wondering if that meter of air path will absorb all your x-rays? The x-ray absorption calculator will tell you! This utility calculates the absorption length and total absorption of any compound, given the x-ray energy, chemical formula, density, and thickness of the compound of interest. It includes a list of common compounds, and you can also add your own.
Chambers – Ever wonder how much X-ray flux is really in your experiment? By entering the basic properties of the ion chamber in your experiment (gas composition, pressure, energy, chamber length, measured voltage and amplifier gain) you will get the x-ray flux in photons per second.
[Source: OPCFW_CODE]
How to Automate The Data-Entry Process of verifying Excel spreadsheet data with web research
I have a huge spreadsheet (about 150K rows) with data on businesses. The info is laid out with varying info on each business in columns. Example: business name, website, phone number, email, social media handles, address, latitude, longitude, etc.
I obtained this info over time via individual, team and web-scraped data entry. Not every business has each category of information present. Now I need to verify each cell in each row to ensure accuracy and fill in any missing info if possible. Here's an example:
[spreadsheet snippet image]
In other words, I need to execute the simple, painstaking task of checking google to see: does this business exist/is it still in operation? Of the information I have, how much of it is correct, and what is inaccurate? From the information I am missing, is it available somewhere online? For EACH BUSINESS! :-(
I need to find a way to automate all of the above. Paying someone to "clean" 150K rows of data or doing it manually is out of the question. And I haven't been able to find any VBA or formula solution.
Please tell me the technology exists to solve this problem and that someone can point me to it!
Without showing what you've done, it would be difficult to assist you with your work. Otherwise, it would be unpaid work that you are looking for. In your words, that would be out of the question.
Sounds like a pretty huge task - have you considered using commercial sources of this type of data? They must exist....
@Mech I just edited my question to include a snippet from the spreadsheet. Not looking for freebies, just sound advice, or some expert to invite me to fiverr/upwork etc so I can get this done if I personally cannot
@TimWilliams my spreadsheet is a compilation of individuals and group input over the years and some web-scraping. In other words, not all of the data corresponds to a nice, clean commercial source url where I can ID. Some of the businesses might no longer be in business, or I might have bad info or am simply missing info that does exist somewhere online
@Mech I need another up vote to chat in the room! Lol
@Mech I just tried to post another question, but the site says I must wait 90 mins! :'(. Might just have to check back in with this tomorrow!
This could be programmed, but it would be a specific programme designed for this work. It's a huge job as I assume data is missing in any column, apart from business name.
@Michelle correct. I figured some manner of programming would do the trick. I just need to know what type. The toughest part of this task is not knowing what to ask for at all, in regards to the tools to do the job. Any guesses to the language needed? I've heard python and javascript might be appropriate
@tawconsu I've got something in the works that would help fill in the missing data.
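As a starting point in Python (one of the languages suggested above), here is a minimal sketch of the offline half of the job: auditing each row for missing or malformed fields before any web verification. The column names, the US-style phone pattern, and the naive email check are all assumptions; the actual existence checks would require a business-data API and an API key, which is not shown here.

```python
import csv
import io
import re

REQUIRED = ["name", "website", "phone", "email"]  # assumed column names
PHONE_RE = re.compile(r"^\+?1?[\s.-]?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}$")
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # deliberately loose

def audit_row(row):
    """Return a list of problems found in one business record."""
    get = lambda col: (row.get(col) or "").strip()
    problems = [f"missing:{col}" for col in REQUIRED if not get(col)]
    if get("phone") and not PHONE_RE.match(get("phone")):
        problems.append("bad_phone")
    if get("email") and not EMAIL_RE.match(get("email")):
        problems.append("bad_email")
    return problems

def audit_csv(text):
    """Yield (business name, problems) for every row that has issues."""
    for row in csv.DictReader(io.StringIO(text)):
        problems = audit_row(row)
        if problems:
            yield (row.get("name") or "?", problems)
```

Running this over the 150K rows gives a worklist of exactly which cells need human or API attention, which is usually far smaller than the full sheet.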
[Source: STACK_EXCHANGE]
Django, how to implement a field that will be filtered in every query
I have a django model that has a field with following properties:
Now implemented using CharField, max_length=1
field is enumeration with 4 choices A,B,C,D
90% of objects will have value A, rest are evenly B,C,D
in 90% of the queries B,C,D will be filtered out, showing only objects with A
There will be maybe 10000-50000 objects
Load is 95% reads with some updating and creating new objects
This model is central to my app, so practically all pages have list or detail view.
So, 90% of the time I am filtering out the same 10% of the objects (+ whatever the query is doing above this basic filtering to get me 10-100 objects to display). What would be the best way performance wise to do this in django? (Optimizing one query might not be worth the effort, but somehow it seems wasteful to keep filtering the data same way all the time...)
Simply use filter()? Filtering on A might be a minor cost anyway, assuming the query optimizer will throw away lots of objects before differentiating between A and the rest of the objects.
filter + index for the field?
filter + index + implement the field with some specific data type to make indexing work better? What type would be ideal?
Multi-table inheritance with dummy base + A(base), B(base), ...? Objects are handled identically, but child class tables wouldn't require filtering for that field.
Something else?
DB is MySql with InnoDB tables. I am planning to do tests with dummy data to compare implementations, but I would appreciate any feedback and links to relevant info. View code adds User/Profile dependent things to output for each object (i.e. rating user may have given) and so I am not sure how much I can cache.
Thanks for the replies! Not optimizing before it's slow makes sense. The only thing I am worried about is that when things start to slow down, it might be more difficult to change anything. Seeing that you are okay with simple filter + index makes me feel safer.
There are two parts to this question: what's better in terms of efficiency, and how to write the code.
In terms of efficiency, there's no need to do anything other than ensure that the database indexes include the field you'll be filtering on. You'll probably need to do at least some of that manually: use something like the django-debug-toolbar to show your queries, and create the composite indexes you need for those queries.
For the code, the best bet is to create a custom manager with a method that filters the A objects only:
class MyManager(models.Manager):
    def only_as(self):
        return self.filter(myfield='A')

...

MyModel.objects.only_as().filter(whatever=whatever)
I think you should not optimize anything until it is slow. For this size of data I'd try filter + an indexed field. I don't know how many requests per second you have.
If it is not fast enough, the idea with separate tables looks reasonable.
Also, you can try to use a kind of cache, splitting your data by key (A, B, C, D).
But my advice is to use the simple solution until it is no longer acceptable.
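The split-by-key cache idea can be sketched without any framework: partition the records by the enum field once, so the hot 'A' bucket becomes a dictionary lookup instead of a repeated filter. This is plain Python with made-up records, not Django; in a real app the bucket would have to be invalidated on writes (e.g. via signals or cache timeouts).

```python
from collections import defaultdict

def partition_by(records, key):
    """Group records into buckets keyed by one field's value."""
    buckets = defaultdict(list)
    for rec in records:
        buckets[rec[key]].append(rec)
    return buckets

# Hypothetical records; 'kind' plays the role of the CharField enum.
records = [
    {"id": 1, "kind": "A"}, {"id": 2, "kind": "B"},
    {"id": 3, "kind": "A"}, {"id": 4, "kind": "D"},
]
buckets = partition_by(records, "kind")
hot = buckets["A"]  # the 90% case is now a lookup, not a per-query scan
```

With only 10,000–50,000 mostly-read objects, such a precomputed bucket (or its Django equivalent, a cached queryset) easily fits in memory.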
[Source: STACK_EXCHANGE]
UEFI Lenovo Z580 Ubuntu Install
I've been trying to dual boot Ubuntu 12.10 alongside windows for some time now on my Lenovo Z580 laptop and, as much as I hate to admit it, need help doing so through UEFI. I've been using this guide (https://help.ubuntu.com/community/UEFI) from the Ubuntu community while attempting to install it.
After I finished going through the install process using the USB (which boots properly), I get the error from SecureBoot stating that the bootloader had failed to verify with access denied.
As such, I ran boot-repair, as recommended by the forums and the community guide, with the following result:
Any help that you could offer regarding this would be greatly appreciated! Please let me know if there's anything else I can provide, and I would gladly do so.
Thank you in advance!
UEFI partition: sda2
Ubuntu partition: sda7
bootloader partition (specified during install): sda2
Re: UEFI Lenovo Z580 Ubuntu Install
Welcome to the forums.
Have you booted the ubuntu entry from the UEFI menu, with secure boot both on and off? And have you turned off fast boot, as that may cause other issues?
Some computers will only boot the Windows entry (or Redhat).
Lenovo ThinkCentre M92p only boots Windows or Redhat.
Other Lenovos that have worked:
Lenovo Ideapad Y500 LiveUSB Problem
Older, so no secure boot issues.
How to install Ubuntu 11.10 on a Lenovo (U)EFI system (tested on S205, B570)
Because some UEFI systems only boot the Windows efi files, the workaround is to rename the grub file to the Windows file name. Back up the entire efi partition first.
Sony - manually copy grub efi files & rename to make them work post #3
Sony - Manually copied but still some issues.
And Boot-Repair under advanced options should offer to backup & rename files.
Boot-Repair - Updated Jan 1, 2013 to not rename first time, but rename if first time Windows does not boot. Post 706
Boot-Repair copied /EFI/ubuntu/grubx64.efi to /EFI/Boot/bootx64.efi (in case the BIOS is hard-coded to boot into /EFI/Boot/bootx64.efi, or, for secure boot, the signed GRUB file shimx64.efi).
Last edited by oldfred; January 12th, 2013 at 01:37 AM.
[Source: OPCFW_CODE]
New to Unity, so I apologize if this seems naive.
Problem Details: I have been running a demo of Unity 3.1(6) in a test lab for a little while, I feel that it is now stable, and I have convinced the powers that be that it is worth our time and money to roll it out. We purchased a fully licensed copy of Unity 4.1 and want to upgrade.
Because we are upgrading from a demo version, the option to buy an upgrade license was not available, so we purchased a full license. However, I need to upgrade from 3.1(6) to 4.1 as seamlessly as possible. Is it possible to download the update and install that, or should I run the full version install? If I should run the full version install, should I run it on top of the old version, or uninstall the old version first (which seems like the worst option)?
We are running Exchange 2003, but not on the same machine. It would be ideal if we could just install 4.1 on top of it as if it were an upgrade.
The first issue I see is that Unity is currently installed in a test lab environment, which means it is currently linked to AD/Exchange in your lab AD, not your production AD. If you want to use the same server, you will need to uninstall Unity from the lab AD and get it installed into your production AD. Please correct me here if I am misunderstanding. Unity will not work just moving it to production AD ... I am speaking from experience here. I had Unity 3.1(5) running in a lab test and had to uninstall from the lab before connecting the server to our production AD back in 2001. I started fresh, formatted the drives, loaded the OS and installed Unity.
It is linked to our production AD. I misrepresented it by saying test lab. Yes, it's a test setup, but we are not currently using it for our business's voice or messaging. We use a more traditional phone system right now. I installed Unity on another server in our domain and linked it to AD and Exchange for testing.
Hi Michael -
I have attached the following links - upgrade information from Unity 3.X to 4.X - http://www.cisco.com/en/US/customer/products/sw/voicesw/ps2237/products_upgrade_guides_chapter09186a0080205a78.html and the 4.1 supported hardware/software guide - http://www.cisco.com/en/US/customer/products/sw/voicesw/ps2237/prod_system_requirements_hardware09186a0080531f2a.html
I believe you can upgrade from 3.1(6) to 4.1 with all of the associated caveats, meaning you have a supported server and OS (Windows 2000 for the voice card driver support). I performed an upgrade from 3.1(5) to 4.0(3), following Cisco's instructions to the letter. We have a CallManager integration, so I didn't have to worry about voice cards :-) The move from the demo USB key to the 4.1 license file should be fine. If you are doing NIC teaming (not load balancing), you will want to make sure you set the team to the MAC that matches your new 4.1 license.
Whether you upgrade or install fresh may depend on the way you want to roll out Unity. This is the time to design your Unity implementation: will you want unified messaging vs. voice-only, or initially have a voice-only implementation but want to migrate to UM? How many Exchange servers do you have today, and how many will you have in the future? Does your AD/Exchange organization have multiple Exchange administrative groups? Will you have multiple Unity servers that you would want to digitally network together or to communicate with other voice servers in your existing telephony network (i.e. Bridge, AMIS, VPIM networking)? Sorry for posing so many questions ... but you are in an excellent position to plan out your design before putting Unity into production. This may be a moot point if you are only planning on one Unity server.
4.X will require additional schema updates to your AD forest. If you are planning on multiple Unity servers and have multiple domains, you might consider moving your Unity accounts (install, directory service, and message store service) to your forest root domain. If this is something you want to consider, there is a 3.x/4.x Uninstall utility available on www.ciscounitytools.com that will uninstall Unity from AD/Exchange and remove all Unity-related information. I would advise using this utility if you decide to install 4.1 fresh instead of performing an upgrade.
If it were me, I would also engage my Cisco SE in working together to establish a plan and have him or her validate it with Cisco. Having this type of relationship will go far as you continue to work with Unity. It's the best application I've ever worked with :-)
Thank you for your insight, Ginger:
I have been using call manager express, which runs on a 2621xm Router, so I don't have a voice card. I am also not doing NIC teaming. We are running one Exchange 2003 Server, which is installed on our only Domain Controller, and I don't foresee us outgrowing this setup, with one Unity server, in the next five years.
I will review the links you sent, and if I have additional questions I will post them here.
I am thinking that it may be less trouble to just remove Unity and reinstall it, but I have a few questions:
1) We are running Exchange 2003. Will this be an issue?
2) If I use the Unity 3.x uninstall tool you referred me to, if Unity is all that is running on this server, is it advantageous to upgrade it from server 2000 to Server 2003?
1) Exchange 2003 has to be off-box with Unity 4.0(3) minimum, and only when you have Unified Messaging. (See Table 2 of http://www.cisco.com/en/US/products/sw/voicesw/ps2237/prod_system_requirements_hardware09186a0080531f2a.html#wp46399)
2) In voice-messaging only, Unity must be on Windows 2000 (it can be a member of a Windows 2003 domain). In Unified Messaging, the Cisco Unity server is a member server in an existing Windows Server 2003 domain connected to an Exchange 2003 server.
Can I install Unity using unified messaging on a Windows 2003 member server in a Windows 2000 domain? Exchange 2003 is running on a separate Windows 2000 Domain Controller.
I think it is ok.
"The Cisco Unity server is a member server in an existing Windows Server 2003 domain. If the Cisco Unity server is running Windows Server 2003, the message store server that Cisco Unity connects with (the partner Exchange server) must be running Exchange 2003. "
Look at this doc, Differences in Support and Functionality When Windows Server 2003 Is Installed on a Cisco Unity 4.0(4) Server, for the list of restrictions.
[Source: OPCFW_CODE]
A thick provisioned vmdk that is 50% filled with data CAN be converted to a thin provisioned VMDK.
A thick provisioned vmdk that is 100% filled with data CAN NOT be converted to a thin provisioned VMDK.
@continuum Hmm, not sure if it's a disk space issue. There are two VMs I'm having issues with. One is a Linux appliance with a very small disk of around 1 MB. I can understand why this one will not convert, so it's not really important. I can leave it as is.
The other VM is Windows 2012. The disk in question is 175 GB with 65 GB of available disk space. Would that be enough to cause the disk not to convert to thin provisioned?
How do you know that only 65 GB of your 175 GB disk has actually been written?
And a 1 MB vmdk sounds very suspicious - please add details.
By the way - to figure out the provisioning type of a given vmdk there is only one reliable way:
Query the flat.vmdk with the command vmkfstools -p 0 name-flat.vmdk > result.txt
If result.txt has even a single line containing VMFS -Z, it is lazy zeroed.
If result.txt has even a single line not containing VMFS, it is thin provisioned.
For this test, ignore the first line of result.txt - that's the info part.
Every line that follows specifies the allocation of a fragment of the flat.vmdk.
Windows shows 65 GB available for that disk
The linux vm is a Cisco Umbrella virtual appliance. There are two. Each have a small disk of about 1 MB. Only one of the two VMs has this disk as thick instead of thin provisioned. These VMs were deployed before I began working here a few months ago.
Capture.JPG 70.5 K
> Windows shows 65 GB available for that disk
That is nice to know but irrelevant to the question.
A flat.vmdk is used to 100% if every single 1 MB block of it has been used once.
If the guest OS cleans up the trash bin later, this does not change the state of those 1 MB blocks.
Whenever you write at least one byte to a 1 MB block of a vmdk, this block changes its state from thin or lazy zeroed to eager zeroed.
To change an eager zeroed 1 MB block back to a thin provisioned block, you must use a function that reclaims the space - such as vmkfstools -i current.vmdk thin.vmdk -d thin.
If you really want to understand this - provide an example where you use the vmkfstools -p 0 name-flat.vmdk > result.txt function
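The block-state rules above can be captured in a toy model (a purely illustrative Python sketch, not how VMFS actually stores anything): one list entry per 1 MB block, where any write allocates the block for good, a guest-side delete only zeroes the contents, and a punch-zero pass reclaims blocks that are allocated but all-zero.

```python
THIN, ALLOCATED = "thin", "allocated"

class ToyVmdk:
    """Toy model of a vmdk as a list of 1 MB block states."""
    def __init__(self, blocks):
        self.state = [THIN] * blocks  # thin-provisioned: nothing allocated
        self.data = [0] * blocks      # guest-visible contents per block

    def write(self, block, value):
        """Writing even one byte permanently allocates the block."""
        self.data[block] = value
        self.state[block] = ALLOCATED

    def guest_delete(self, block):
        """Guest 'deletes' a file: contents change, allocation does not."""
        self.data[block] = 0

    def punch_zero(self):
        """Space reclamation (in the spirit of vmkfstools -K): free
        blocks that are allocated but contain only zeros."""
        for i, (s, d) in enumerate(zip(self.state, self.data)):
            if s == ALLOCATED and d == 0:
                self.state[i] = THIN
```

This is why Windows reporting 65 GB free is irrelevant: deleting files changes only `data`, never `state`, until a reclamation pass runs.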
I had been away for a few days...
In regards to the "vmkfstools -p 0 name-flat.vmdk > result.txt" function, I've checked and I do not see any *.flat.vmdk files for any of the VMs in question.
There is a KB article for this, and unless it's a specific setup, you need to do it manually.
To reclaim the unused space of a virtual disk in ESXi/ESX 4.1 or later:
Note: Where vmkfstools supports the -K option (--punchzero), you can reclaim the zeroed blocks of thin-provisioned virtual disks without the need to clone to another VMFS datastore with a different block size.
- Ensure that the disk has no Snapshots.
- In a Windows virtual machine, run the SDelete command (or a tool with similar functionality) to zero out all unused space. The syntax for the SDelete command is SDelete -z driveletter. If you use SDelete, ensure that you use version 1.6 or later.
Note: Zeroing all unused blocks inflates the disk to its full size as if it was an eagerzeroed disk. Ensure that there is sufficient space on the datastore to allow the disk to grow to its full size. For more information, see Determining if a VMDK is zeroedthick or eagerzeroedthick (1011170).
- Shut down the virtual machine or temporarily remove the virtual disk from the virtual machine to ensure that it is not in use.
- Erase all unused blocks by running the command:
vmkfstools -K /path/to/disk-name.vmdk
[Source: OPCFW_CODE]
What is the arrow of time, and why has it baffled physicists for nearly a century?
The arrow of time can be explained rather simply as the observation that we remember the past and not the future. We have access to history books and all other types of records about what's come before us, but no such information from the other direction.
Now, this may seem simplistic, but there's a conundrum here. The laws of physics are symmetric, meaning they work regardless of which way you're moving in time. For example, imagine you watched a movie of an egg falling off a table and shattering on the floor. If you watched that same movie on rewind, with all the cracks and pieces of the broken egg neatly reorganizing themselves, and that reformation energy forcing the egg to leap back onto the table, well, that also obeys the laws of physics.
So now we have a question. Why is it that everywhere we look, we always see the first scenario and never the second?
Do we have any plausible explanations?
There are many different explanations, and most of them revolve around the idea that the arrow of time is basically generated by an increase of entropy. Entropy, very roughly speaking, is a measure of how jumbled and disordered a system is. And entropy is not symmetrical. This is called the second law of thermodynamics: we know that over the long run, any large enough system will always increase in entropy; it will move from an ordered state to a less ordered state.
Imagine you poured a saltshaker half full with salt, and then topped it off with pepper. It'd look neatly layered at first; but every time you moved or shook it, your salt and pepper would become increasingly mixed and disordered. That's entropy. And because it's a one-way process, many physicists have hypothesized that it somehow dictates the direction the arrow of time is pointing.
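The salt-and-pepper picture can be made quantitative with a toy calculation (a sketch, using nothing beyond the standard Shannon entropy of small windows): a neatly layered jar has zero local mixing entropy, and shaking drives it toward the one-bit-per-grain maximum.

```python
import math
import random
from collections import Counter

def local_mixing_entropy(grains, window=10):
    """Average Shannon entropy (in bits) over fixed-size windows of the jar."""
    chunks = [grains[i:i + window] for i in range(0, len(grains), window)]
    total = 0.0
    for chunk in chunks:
        for count in Counter(chunk).values():
            p = count / len(chunk)
            total -= p * math.log2(p)   # 0 when a window is all one species
    return total / len(chunks)

jar = ["salt"] * 100 + ["pepper"] * 100  # neatly layered to start
layered = local_mixing_entropy(jar)       # exactly 0: perfectly ordered

random.seed(0)
random.shuffle(jar)                       # one good shake
mixed = local_mixing_entropy(jar)         # approaches 1 bit per grain
```

Repeated shuffles leave `mixed` hovering near the maximum, which is the upper limit on entropy discussed next.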
But these explanations have two serious issues. The first is that entropy has an upper limit: your salt and pepper shaker can only get so randomized, until shaking it doesn't make it any more disordered. Second, to see an increase in entropy (and thus generate this arrow of time), you'd need a special starting configuration where the salt and pepper were organized to begin with. If we look at our own universe, this cries out for an explanation: a highly organized initial state is a very, very unlikely random configuration.
You created a model that shows you can actually circumvent these issues by looking at a property called complexity. Can you explain that?
We made a model that's an approximation of the large scale universe, where gravity is the dominating force, and the universe is filled with particles. Keep in mind, it's a simplified approximation. For example, we don't include any of the other forces, or anything like gravitational waves or dark matter.
Now, the reason we didn't need any special starting conditions to generate an arrow of time is complicated, but it's rooted in the fact that gravity, unlike all the other forces, is universally attracting. (While the strong and weak forces and electromagnetism can push or pull different types of particles, gravity only pulls.) This is important. Because while the combination of an attraction and repulsion will inevitably create a sort of chaotic equilibrium, the constant pull of gravity will continually grow a sort of structure, from which we can derive an arrow of time.
What this means from the perspective of our model is that given any random initial smattering of particles, as gravity starts pulling, the universe fragments into clusters that get denser and denser; our model coagulated into these little subsystems. If it helps, you can think of them like globular clusters of stars. Those clusters, because they developed their own definite rotation, energy, and momentum, actually collected information about the rest of the model. They encoded data about what the past structure of the model looked like through their various properties, somewhat analogous to a history book. In other words, they pointed one way in time.
Back up for a second. If we're looking only at gravity, then why didn't your model just collapse upon itself?
That's an interesting point. We know that when you look at the universe as a whole, it's expanding. We've implemented this expansion into our model by saying that the ratio of the largest and smallest distance between particles is constantly increasing.
This was key, because in this expanding system where gravity is dominating, you immediately see something very interesting happening. The complexity of the universe (and we use 'complexity' as a precise physical quantity to describe how clustered our model is) grows without end. We found that you can create a model where the system's complexity increases unboundedly, regardless of what starting position you input.
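For readers who want to poke at this idea, one common formalization of "complexity" in this line of work is the ratio of the root-mean-square particle separation to the mean harmonic separation; treat the exact formula below as our paraphrase of that idea rather than the researchers' own code. A clumpy configuration scores higher than a smooth one because clustering shrinks the harmonic mean of the distances while the overall spread stays large.

```python
import numpy as np

def complexity(points):
    """Shape complexity of a particle configuration: root-mean-square
    pairwise separation divided by the mean harmonic separation.
    Clustered configurations score higher than smooth ones."""
    diffs = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diffs ** 2).sum(-1))
    pair = d[np.triu_indices(len(points), k=1)]  # each pair once, no diagonal
    l_rms = np.sqrt(np.mean(pair ** 2))
    l_mhl = 1.0 / np.mean(1.0 / pair)
    return l_rms / l_mhl

rng = np.random.default_rng(0)
# A smooth smattering of 60 particles in a 10x10x10 box...
smooth = rng.uniform(0, 10, size=(60, 3))
# ...versus the same number of particles pulled into three tight clumps.
centers = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0]], dtype=float)
clumped = centers[rng.integers(0, 3, 60)] + rng.normal(0, 0.05, size=(60, 3))
```

By the power-mean inequality this ratio is always at least 1, and it grows without bound as substructure forms, which is the sense in which the model's clustering can increase forever even while entropy-style measures saturate.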
But what about all the other physical phenomena that aren't related to gravity? Why do we always see those moving one way in time?
We're actually working on that right now, and I'll try to simplify our early conclusions. One great example is that if you look at a decaying atom, you always find that it decays into a lighter atom, never a heavier one. That follows an arrow of time, and seemingly has nothing to do with gravity, right? Not exactly. You have to realize, for that atom, something had to put it into a special starting state where it was able to decay.
We have not yet described such an atom. But we do have a model in which the early universe, when gravity was the dominating force, generates very atypical starting states. And as the universe expanded, and gravity ceased being the dominating force for small subsystems like the atom, those starting states somehow forced all the other arrows of time to march in step.
So you're telling me it's possible that the early universe had multiple arrows of time, moving in different directions?
Yes, it's possible. We actually call this process hylogenesisthe idea that at some stage in the early universe the different arrows of time were all disordered. But because gravity was the dominating force, it eventually pushed all of them to point in the same direction. Before that point, there was no space-time in the sense in which we currently experience it.
To improve the accessibility of our content, please find the audio version of this blog post.
ST is launching today version 2.6 of STM32CubeProgrammer, which inaugurates a new and rapid way to download Sigfox credentials onto an STM32WL microcontroller, among many other things. Overall, the latest release tackles one particular challenge: making the development of modern embedded applications more accessible. Indeed, even seasoned engineers can struggle to take advantage of the latest IPs or functionalities. For instance, the STM32WL is the first MCU to embed a sub-gigahertz transceiver. However, writing a firmware that connects to a LoRa or Sigfox network isn’t always straightforward. Similarly, we continue to offer new security features, like Secure Firmware Install, under our STM32Trust initiative. Yet, implementing them can take time, especially if a team is unfamiliar with them. The new version of STM32CubeProgrammer offers remedies that increase accessibility by lowering complexity.
Unification of Tools and Interfaces
STM32CubeProgrammer 2.6 is highly symbolic because until now, the software tool focused on unifying the user experience. We brought all the features of utilities like the ST Visual Programmer, ST-LINK Utility, DFUs, and others to STM32CubeProgrammer, which became a one-stop-shop for developers working on embedded systems, by replacing all these legacy tools. We also worked on improving our support for all major operating systems. Indeed, version 2.6 is a culmination of that effort since its package embeds OpenJDK8-Liberica. Hence, users will not need to install Java and struggle with compatibility issues before installing STM32CubeProgrammer. Now that this work of integration and unification is complete, we turned our attention to workflow simplification.
STM32CubeProgrammer 2.6: Increasing Accessibility and Efficiency
Sigfox Activation in a Few Clicks
Before a device like the STM32WL can connect to the Sigfox network, developers must deal with credentials, certificates, and device activation, which can be disorienting to new developers. Indeed, every Sigfox device must possess a private key that the system will use during authentication. Developers must also register the device on the Sigfox website before they can send their first message. Ultimately, the back and forth between tools and back-ends can slow down developments. Programmers working on numerous STM32WL will especially appreciate the efficiency of the new system we are introducing with STM32CubeProgrammer 2.6.
The new release has a menu option entitled “Sigfox Credentials” that extracts the certificate embedded into the STM32WL. With a simple click, developers can access this 136-byte string, copy it to their clipboard, or save it in a binary file. We also created a new page on our website at https://my.st.com/sfxp where developers can paste the certificate and immediately download Sigfox credentials in the form of a ZIP file, thus vastly accelerating that part of the process. Engineers can then load the content of that downloaded package to the MCU through STM32CubeProgrammer. Finally, developers get the Sigfox ID and PAC of their device with a simple AT command line before heading to https://buy.sigfox.com/activate/ to enter them in the Sigfox system. The activation will last two years, and developers can send 140 messages per day for free for a year.
Opening SFI on STM32WL and STM32L5 to All Bootloader Interfaces
Readers of the ST Blog know STM32CubeProgrammer as a central piece of the security solutions present in the STM32Cube Ecosystem. The utility comes with Trusted Package Creator, which enables developers to upload an OEM key to a hardware secure module and to encrypt a firmware using this same key. OEMs then use STM32CubeProgrammer to securely install the firmware onto an SFI-capable STM32 microcontroller, such as the STM32WL. Before version 2.6, implementing a Secure Firmware Install on the STM32L5 demanded that OEMs use a USB or UART interface. With STM32CubeProgrammer 2.6, they can now use I2C or SPI interfaces as well, which gives them more flexibility. Additionally, the STM32L5 also supports external Secure Firmware Install (SFIx), meaning that OEMs can flash the encrypted binary to memory modules outside the microcontroller. And with this new version of STM32CubeProgrammer, teams can perform an SFIx via all the bootloader interfaces, rather than a select few.
STM32CubeProgrammer: Improving Simplicity and Practicality
On-the-Fly Register Updates and Flashing Multiple External Memories at Once
Debugging and flashing operations are famously tedious, and STM32CubeProgrammer brings two solutions to this challenge. Version 2.6 introduces the ability to dump the entire register map and edit any register on the fly. Previously, changing a register’s value meant changing the source code, recompiling it, and flashing the firmware. Now, testing new parameters or determining whether a value is causing a bug is a lot simpler. Similarly, engineers can now use STM32CubeProgrammer to flash all external memories at once. Previously, flashing the external embedded storage and an SD card demanded that developers launch each process separately. With version 2.6, we are introducing the ability to do it in one step. Developers no longer risk forgetting one of the memories, and flashing operations become a lot more efficient.
Optimizing the Serial Wire Viewer With a Color Scheme
Another challenge for developers comes from parsing the massive amount of information that passes through STM32CubeProgrammer. Anyone who has ever flashed a firmware knows how difficult it is to track all the logs. Hence, we are introducing custom traces that allow developers to assign a color to a particular “printf” output. This ensures developers can distinguish a specific output from the rest of the log far more rapidly, making debugging a lot more straightforward and intuitive. Developers can spend days personalizing their IDE's color themes; STM32CubeProgrammer 2.6 now offers the same kind of customization for its console.
Catalina and spring class loading problem
Hello,
I have no idea if this anything to do with this plugin so feel free to throw this away.
Anyway, been debugging old Spring application that needs to be upgraded from 4 to 5.
Old project has been using this plugin to run the application on Tomcat without problems.
However, after upgrading to Spring 5, I keep getting errors like:
2024-03-07 18:10:31,138 WARN [main] org.springframework.web.context.support.XmlWebApplicationContext: Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter#0' defined in ServletContext resource [/WEB-INF/index-servlet.xml]: Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter]: Constructor threw exception; nested exception is java.lang.LinkageError: loader constraint violation: when resolving method 'reactor.core.publisher.Mono reactor.core.publisher.Mono.from(org.reactivestreams.Publisher)' the class loader org.apache.catalina.loader.ParallelWebappClassLoader @3c9bfddc of the current class, org/springframework/core/ReactiveAdapterRegistry$ReactorRegistrar, and the class loader 'app' for the method's defining class, reactor/core/publisher/Mono, have different Class objects for the type org/reactivestreams/Publisher used in the signature (org.springframework.core.ReactiveAdapterRegistry$ReactorRegistrar is in unnamed module of loader org.apache.catalina.loader.ParallelWebappClassLoader @3c9bfddc, parent loader 'app'; reactor.core.publisher.Mono is in unnamed module of loader 'app')
2024-03-07 18:10:31,140 ERROR [main] org.springframework.web.servlet.DispatcherServlet: Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter#0' defined in ServletContext resource [/WEB-INF/index-servlet.xml]: Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter]: Constructor threw exception; nested exception is java.lang.LinkageError: loader constraint violation: when resolving method 'reactor.core.publisher.Mono reactor.core.publisher.Mono.from(org.reactivestreams.Publisher)' the class loader org.apache.catalina.loader.ParallelWebappClassLoader @3c9bfddc of the current class, org/springframework/core/ReactiveAdapterRegistry$ReactorRegistrar, and the class loader 'app' for the method's defining class, reactor/core/publisher/Mono, have different Class objects for the type org/reactivestreams/Publisher used in the signature (org.springframework.core.ReactiveAdapterRegistry$ReactorRegistrar is in unnamed module of loader org.apache.catalina.loader.ParallelWebappClassLoader @3c9bfddc, parent loader 'app'; reactor.core.publisher.Mono is in unnamed module of loader 'app')
and
Illegal reflective access by org.apache.catalina.loader.WebappClassLoaderBase (file:/Users/xxxx/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/com/heroku/webapp-runner/<IP_ADDRESS>/webapp-runner-<IP_ADDRESS>.jar) to field java.io.ObjectStreamClass$Caches.localDescs
i have been trying different minor versions of spring and heroku webapp runner. Also tried looking into dependency tree for reactive-streams conflicts but there seems to be none.
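From what I've read, this kind of LinkageError generally means the Reactive Streams API classes are visible from two class loaders at once (here, the webapp loader and the runner's 'app' loader), so one thing I'm considering is excluding the jar from whichever dependency drags it into WEB-INF/lib. A sketch of what that would look like — spring-webmvc and the version number are just illustrative guesses, not necessarily the actual culprit in my tree:

```xml
<!-- Illustrative only: exclude the transitive reactive-streams jar so that
     the copy on the runner's class path is the only one visible. -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>5.3.33</version>
    <exclusions>
        <exclusion>
            <groupId>org.reactivestreams</groupId>
            <artifactId>reactive-streams</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```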
Any advice is welcome as I'm stuck here. Thanks for reading.
Oof, that looks like a fun bit of debugging.
Would it be possible to switch from Tomcat to Jetty just to see if the errors are any different?
Thanks for the quick response. It's quite a bit of refactoring, but I'll give it a try.
Would it be possible to share a version of your project that reproduces this error?
Optimistically closing this as fixed in v5. Please reopen if you encounter it with the new version.
Susanna attended GLAM-Wiki 2013 in London and presented the historical maps project that has grown out of our GLAM cooperation with the Finnish National Archives. It was decided that the project shall be called Wikimaps, and it will be developed along the lines of an earlier proposal by Maarten Dammers. The core team is now working on a proof of concept to present at the Amsterdam Hackathon, and we have been making contacts with developers within the OpenStreetMap community. In addition to the Wikiteam, we are building a collaboration team in Finland to support the project, including historians, GIS experts, archivists, Wikipedians, open source and open content activists, a linked data research team, and relevant organizations who can help realize the project. Other than maps, the London event was a great dive into the GLAM world of Wikimedia. Thank you!
Wiki Loves Public Art
Wikimedia Finland has had its hands full starting up the public art photography contest. Compiling a list of public-domain works took the most effort, but assembling the local jury and securing prize sponsorship turned out to take some time as well. WLPA has a political aspect that we'd like to highlight. The nature of public works of art is that they should be accessible to and enjoyed by potentially everyone. While this is the case in the physical world, copyright restrictions severely hamstring their impact online. Simply treating all public works as being in the public domain might be excessive, and we don't claim to have any clear answers to the dilemma of accessibility. We hope to start a public discussion on this matter, as the debate on copyright restrictions seems to focus mainly on digital piracy and contracts these days.
Wiki School, GLAM meetups
Kiasma seems to be our go-to partner and primary testing ground for new events. We took part in a short educational session on how to edit Wikipedia. The model was somewhat similar to the one used with Ateneum where a smaller group meets around a table and cooperates on a given topic. My wish is that these meetups would become regular and that they would foster the idea of open knowledge as a rewarding hobby.
I was also present at the monthly meeting of the Finnish Open GLAM group where we shared ideas on how libraries could be of help in promoting open culture. There were some exciting news from the team working on the Helsinki Central Library. Openness starts with taking the wishes of the public into consideration and they've done a good job at it so far.
Our chapter applied for public funding for GLAM-related activities from the Ministry of Education and Culture. They have an application process for projects relating to culture, and our overall goals overlap neatly with theirs. So far, our chapter's activities have been completely dependent on volunteer efforts by our members and our partner institutions. If the funding materializes, we'll be able to cover the expenses of outreach programs, training, workshops, and trips to GLAM events. This would give essential continuity to our work.
IMPORTANT: You must supply a User Agent for all interactions.
GET – Read Data
A GET for the above resource will return the XML for subscriber 124’s subscription to syndication 57. This call will only be successful if the authenticated user is publishing syndication number 57, and if subscriber 124 is in fact subscribed to it. If the call is unsuccessful, the returned XML will contain an empty <subscription /> tag (if the user does not own syndication 57) or a populated <subscription> tag with an empty <subscriber /> tag if subscriber 124 does not subscribe to feed 57.
It is possible to use shorter but complete paths as follows:
|Returns all the user’s published feeds
|Returns the user’s published feed # 57
|Returns all the subscribers to feed 57
|Subscriber 124’s subscription to feed 57
Note that /syndications/subscribers is an invalid resource. To fetch all their subscribers, a user should simply GET the /subscribers resource.
Also note that you should only use the paths documented here. Some resource paths you may expect to work are not valid and not supported, such as /subscribers/124/subscriptions/57 – to get this information, query /syndications/57/subscribers/124 instead. /subscribers/124 is valid, returning all 124’s subscriptions for this list owner.
POST – Edit and Search
If supported by the resource, you can POST edits at any point in the resource path structure. The results will depend on the path used and the data in the POST. Use the returned XML to judge the success or otherwise of your edit; do not rely on the HTTP return code (which will always be 200 OK in this release).
In this example, a POST at each point in the hierarchy behaves as follows:
|Updates all the syndication entities included in the POST data
|Only updates feed #57. Fails if any other feed id is specified.
|Edits the subscribers included in the POST data for feed 57
|Updates subscriber 124’s subscription to 57, if the IDs match.
The results of a POST are typically (but not always – see the documentation) the same as a GET for the specified resource.
POST is also used for searching and sorting result sets. Where supported, replace the ID with the literal “search”:
|Search and sort this user’s syndications
|Search the subscribers to feed 57
Not all resources support searching. See the documentation for resource-specific information.
Important: You must POST with the MIME type text/xml or application/xml for your data to be interpreted correctly. You cannot, for example, post with an ordinary HTML <form> in a browser.
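As a concrete illustration of the ground rules so far, here is a minimal Python sketch that builds API requests: every call carries a User-Agent, and any XML payload is sent with an XML MIME type. The base URL is a placeholder (check the FeedBlitz documentation for the real endpoint); the resource paths are the ones documented above.

```python
from urllib.request import Request

BASE = "https://api.example.com"  # placeholder; substitute the real FeedBlitz endpoint

def feedblitz_request(path, xml_body=None, method="GET", user_agent="MyApp/1.0"):
    """Build a request obeying the API's ground rules: every call carries
    a User-Agent header, and any XML payload is sent as text/xml."""
    headers = {"User-Agent": user_agent}
    data = None
    if xml_body is not None:
        data = xml_body.encode("utf-8")
        headers["Content-Type"] = "text/xml"  # application/xml also works
    return Request(BASE + path, data=data, headers=headers, method=method)

# Subscriber 124's subscription to feed 57 (GET):
req = feedblitz_request("/syndications/57/subscribers/124")

# Edit that subscription (POST with an XML body):
edit = feedblitz_request(
    "/syndications/57/subscribers/124",
    xml_body="<subscription>...</subscription>",
    method="POST",
)
```

When you actually send these requests, remember to judge success from the returned XML rather than the HTTP status code, which is always 200 OK in this release.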
DELETE – Resource Removal
DELETEs may be issued to remove the associated resources from the system. Resources are typically not physically removed from FeedBlitz; rather they are marked as deleted. Deleted resources are retrieved, if present, by a GET statement, so be sure not to present them to end users if that would be confusing or inappropriate.
The scope of a DELETE is the same as for a GET; it works at the level specified by the resource path. Use DELETE with care, and ensure that the user has confirmed their desire to make the relevant changes before you call the API. Use the returned XML, not the HTTP return code, to evaluate the success of the requested operation.
|Deletes all published feeds, and all the subscriptions to them.
|Deletes feed # 57 and removes any subscriptions.
|Deletes all the subscribers to feed 57, but leaves the syndication itself alone.
|Deletes subscriber 124’s subscription to feed 57 only.
PUT – Add a New Resource
New resources are added by using a PUT at the relevant level in the path:
|Add a new syndication
|Add a new subscriber to feed 57
A PUT, like a POST, typically returns the full resource XML (as if a GET were performed on the newly created entity). Exceptions are made for privacy reasons in the /user resource, where only the email address of the user (if the user is not anonymous) is returned to the caller, along with a status code indicating whether the operation was successful or not.
- A PUT on an existing resource is NOT treated by the FeedBlitz API as being the same as a POST to that resource. A PUT on an existing resource will not edit that resource’s settings by design. You MUST use the POST method to alter an existing resource’s variables.
- There are currently no DTDs or other formal online XML documentation for validating XML documents used by the API.
- Not all methods are supported by all resources. Don’t assume.
Important: You must PUT with the MIME type text/xml or application/xml for your data to be interpreted correctly. You cannot post using an ordinary HTML <form> in a browser.
Looking for a more specific Raptor engine model, that follows fuel and oxidiser flow. Anyone know where to find good Raptor resources?
Here is our raptor inspired model in SATSIM, our virtual aerospace lab. Want to be able to model the fuel and oxidiser flow at some stage both in Blender, then WebGLbut it's trickier than it looks. Are there any resources which detail Raptor parts and functions in more detail? Keen to know more. Going to build a more detailed breakdown of each component, with mini tutorials on each component and its function if people are interested in it.
That will be quite hard, as the Raptor is still very much under development and improvement; the detail of the Raptor plumbing and wiring changes on an almost weekly basis (judging from the photos of the Raptor here: https://forum.nasaspaceflight.com/index.php?topic=53555.0 ), not to mention the structure around them as they are installed, like the recently-added aero(?) shields around the outer ring of Raptors on the Super Heavy booster. Also, which Raptor? The 200-ton test units on Starship? The 230-ton non-gimballed ones? Raptor Vacuum? Raptor 2?
Agree with @CuteKitty_pleaseStopBArking, what you are looking for is not publicly available. Odd that you'd announce a project to "build a more detailed breakdown of each component, with mini tutorials on each component and its function" before looking into this.
SpaceX designs are their intellectual property, and what gives their company a competitive advantage, they aren't open source.
Appreciate your responses, thanks : )
Of course, I'm not looking for the REAL CAD files or schematics; it's just that the full-flow closed-cycle engine is interesting. I personally want to know more about the theoretical components in this type of system, so it's natural for me to do this sort of mock-up; nothing helps you learn more than thinking about what goes into the assembly of the engine. So it's a Raptor "inspired" model, not the real thing!
As for announcing a project, I wanted to know if anyone was interested in it or not. All the models I see out there are basic illustrations, schematics or photorealistic 3D models, but I can't find anything that's more tutorial based. Realistically, there's no point in doing a project at that scale if no one else cares about it, I'm better off just putting my own animated model together in Blender. No upvotes so perhaps this is the wrong audience, or no one really wants it. It's all good data.
Pardon my ignorance but what is SATSIM?
@Ng Ph SATSIM is a prototype of a virtual aerospace lab I'm building.
If you want to try a demo visit here https://www.satsim.space/webmvp/survey/6.php
Thanks. I haven't tried it yet, but I think you should include this explanation in your question. Nobody could have guessed this background otherwise.
Once you have a piece of content, you've baked a cake. Now you can take this cake and slice it up. Serve those slices to different channels to reach new audiences. Some get smaller pieces, some get bigger pieces. Serve a slice of webinar as a YouTube short. Serve a
Did you know that rotating helix thingy outside barber shops is called a barber's pole? It's something you see but you don't see it, right? It's enough to signal you're in the right place but that's as deep as it goes. The technology world is full of barber poles. Jargon,
They educate. Inspire. Help. Inform. Celebrate. Include. Question. Explain. Demo. Answer. Explore. In other words, they build trust. They don't sell... until the very, very end. IF at all. The content itself is the persuasive factor. Here's just one example of a Very Good Webinar.
You know the saying, "The first impression is the last impression." For someone using your product or open source project and learning it the very first time, is it any good? How do you know? Are you measuring it? How do you measure what "good" is? I would start with
Your first-time learning experience (FTLX) is an owned experience you create that teaches users how to use your product. This would be something like: * In-app onboarding * Course * Quickstart tutorial * Sample app * Readme There's another adjacent term, first-time user experience. That is used within the product context. I'm talking about a
It might be nice if people stopped by to chat with your advocates or team at the conference booth. But conferences are busy, noisy, and often create circumstances that uh... aren't great for interaction. I can think of tons of situations where I couldn't easily interact: * Your reps are busy
What the heck's the difference?! Well, I don't think it's about one-way or two-way interaction – every platform can basically work both ways. Instead, I think each type of event evokes a vibe: * A live event's vibe is electric, there's a hum of excitement in the air * A live stream's vibe
If you give conference talks as part of your DevRel work, there are many ways to structure them. One of the first ones I remember being taught is what I'll call The Three Tees: Tell 'em what you're going to tell 'em Tell 'em Tell 'em what you told 'em
If an open source project is created first and then a community grows around it before a commercial offering is available, I'd call this community-led growth (e.g. Redis, Red Hat). If an open source project is created alongside a commercial product (aka "open source core" or "open source startups"
If you're the alpha, you probably have the resources to hire an army of developer advocates to maintain your dominance. If you're the underdog, it's going to be an uphill battle to take over the pack. Instead, you could leave and find a new pack. Sometimes finding an underserved niche
"""
Created on Oct 4, 2017
@author: Alan Williams
"""
from copy import deepcopy
from typing import Optional, Set
import numpy as np
from tconfig.core.data import ParameterSet, DEFAULT_NDARRAY_TYPE
from tconfig.core.algorithms import Generator
from tconfig.core.algorithms.ipo import InteractionElement
# pylint: disable=invalid-name, unsubscriptable-object
class IpoGenerator(Generator):
def __init__(
self,
parameter_set: ParameterSet,
coverage_degree: int = 2,
*,
dtype=DEFAULT_NDARRAY_TYPE,
existing_configs: Optional[np.ndarray] = None
):
super().__init__(
parameter_set,
coverage_degree,
dtype=dtype,
existing_configs=existing_configs,
)
self.test_set = None if existing_configs is None else deepcopy(existing_configs)
def add_new_test(
self, t_prime: np.ndarray, pkw: InteractionElement, piu: InteractionElement
) -> np.ndarray:
"""
Create a new test
"""
config_width = 1 + max(max(pkw.keys()), max(piu.keys()))
new_config = np.zeros((1, config_width), dtype=self.ndarray_type)
for index in pkw:
new_config[0, index] = pkw[index]
for index in piu:
new_config[0, index] = piu[index]
if t_prime is None:
return new_config
return np.vstack([t_prime, new_config])
@staticmethod
def is_zero(test, pkw: InteractionElement) -> bool:
"""
Returns True if all values in 'test' are either 0 or the same as the
test index, and False otherwise.
"""
return all(test[index] in [0, value] for index, value in pkw.items())
def contains_dash_in_t(
self,
t_prime: Optional[np.ndarray],
pkw: InteractionElement,
piu: InteractionElement,
) -> int:
"""
Looks for 0 values in a test set that can be filled in.
"""
pi = list(piu)[0]
u = piu[pi]
for test_num, test in enumerate(self.test_set):
if self.is_zero(test, pkw) and test[pi] in [0, u]:
return test_num
if t_prime is not None:
for test_num, test in enumerate(t_prime):
if self.is_zero(test, pkw) and test[pi] == u:
return test_num + len(self.test_set)
return -1
def first_parameters(self):
strength = self.coverage_degree
value = [1] * strength
initial_num_rows = 1
for index in range(0, strength):
initial_num_rows *= self.num_values_per_parm[index]
initial_num_columns = strength
self.test_set = np.empty(
(initial_num_rows, initial_num_columns), dtype=self.ndarray_type
)
for test_num in range(0, initial_num_rows):
for col in range(0, strength):
self.test_set[test_num][col] = value[col]
if test_num < initial_num_rows - 1:
incr_col = strength - 1
value[incr_col] += 1
while value[incr_col] > self.num_values_per_parm[incr_col]:
value[incr_col] = 1
incr_col -= 1
value[incr_col] += 1
def get_hori_recur(
self,
new_parameter_index: int,
cover: int,
pi: Set[InteractionElement],
test_value_list: InteractionElement,
) -> Set[InteractionElement]:
for i in range(0, new_parameter_index):
if i in test_value_list:
continue
for value in range(1, 1 + self.num_values_per_parm[i]):
ie = InteractionElement(test_value_list)
ie[i] = value
if cover == 1 and ie not in pi:
pi.add(ie)
if cover != 1:
self.get_hori_recur(new_parameter_index, cover - 1, pi, ie)
return pi
def get_test_value_recur(
self,
orig_set: Set[InteractionElement],
config_num: int,
parm_num: int,
cover: int,
result,
ie: InteractionElement,
):
for i in range(0, parm_num):
same_flag = i in ie.keys()
if same_flag:
continue
new_ie = InteractionElement(ie)
new_ie[i] = self.test_set[config_num][i]
if cover == 1:
if new_ie in orig_set and new_ie not in result:
result.add(new_ie)
else:
self.get_test_value_recur(
orig_set, config_num, parm_num, cover - 1, result, new_ie
)
def pairs_covered_in(
self,
orig_set: Set[InteractionElement],
config_num: int,
parm_num: int,
new_value: int,
) -> Set[InteractionElement]:
result: Set[InteractionElement] = set()
for parm_index in range(0, parm_num):
ie = InteractionElement(
{parm_num: new_value, parm_index: self.test_set[config_num][parm_index]}
)
if self.coverage_degree > 2:
self.get_test_value_recur(
orig_set, config_num, parm_num, self.coverage_degree - 2, result, ie
)
else:
if ie in orig_set:
result.add(ie)
return result
def do_horizontal_growth(self, new_parameter_index: int) -> Set[InteractionElement]:
"""
Add one more parameter, and determine which values to fill in
to the existing configuration set that cover the most interaction
elements for the new parameter.
"""
self.test_set = np.insert(self.test_set, self.test_set.shape[1], 0, axis=1)
pi = set()
num_new_values = self.num_values_per_parm[new_parameter_index]
for nv_index in range(1, num_new_values + 1):
for parm_index in range(0, new_parameter_index):
for value_index in range(1, self.num_values_per_parm[parm_index] + 1):
ie = InteractionElement(
{new_parameter_index: nv_index, parm_index: value_index}
)
if self.coverage_degree > 2:
pi = self.get_hori_recur(
new_parameter_index, self.coverage_degree - 2, pi, ie
)
else:
pi.add(ie)
s = min(num_new_values, len(self.test_set))
for j in range(0, s):
self.test_set[j][new_parameter_index] = j + 1
covered = self.pairs_covered_in(pi, j, new_parameter_index, j + 1)
pi = pi - covered
if s == len(self.test_set):
return pi
for j in range(s, len(self.test_set)):
pi_prime = set()
v_prime = 0
for nv_index in range(1, num_new_values + 1):
pi_double_prime = self.pairs_covered_in(
pi, j, new_parameter_index, nv_index
)
if len(pi_double_prime) > len(pi_prime):
pi_prime = pi_double_prime
v_prime = nv_index
if v_prime != 0:
self.test_set[j][new_parameter_index] = v_prime
pi = pi - pi_prime
return pi
def do_vertical_growth(
self, pi: Set[InteractionElement], new_parameter_index: int
) -> np.ndarray:
"""
Determine a set of additional test configurations that are needed
to completely cover the set of interaction elements required for
the new parameter.
"""
t_prime: Optional[np.ndarray] = None
for ie in pi:
piu = InteractionElement({new_parameter_index: ie[new_parameter_index]})
pkw = InteractionElement(ie.data)
del pkw[new_parameter_index]
tau = self.contains_dash_in_t(t_prime, pkw, piu)
if tau >= 0:
if tau >= len(self.test_set):
for i in pkw:
t_prime[tau - len(self.test_set)][i] = pkw[
i
] # pylint: disable=unsubscriptable-object
else:
for i in pkw:
self.test_set[tau][i] = pkw[i]
if self.test_set[tau][new_parameter_index] == 0:
self.test_set[tau][new_parameter_index] = piu[
new_parameter_index
]
else:
t_prime = self.add_new_test(t_prime, pkw, piu)
return t_prime
def generate_covering_array(self) -> np.ndarray:
"""
Generate a set of configurations for the ParameterSet and coverage degree.
"""
if self.test_set is None or len(self.test_set[0]) < self.coverage_degree:
self.first_parameters()
if self.num_parms > self.coverage_degree:
for parm_index in range(self.coverage_degree, self.num_parms):
missing_interactions = self.do_horizontal_growth(parm_index)
new_rows = self.do_vertical_growth(missing_interactions, parm_index)
if new_rows is not None:
self.test_set = (
new_rows
if self.test_set is None
else np.vstack([self.test_set, new_rows])
)
return self.test_set
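For readers less familiar with covering arrays, the following standalone sketch (independent of the class above; the helper name and example rows are mine) shows what it means for a test set to "cover" all 2-way interactions: every pair of values for every pair of parameters must appear together in at least one row.

```python
from itertools import combinations, product

def covers_all_pairs(rows, num_values_per_parm):
    """Check that every value pair of every parameter pair
    appears in at least one row (2-way / pairwise coverage)."""
    for p1, p2 in combinations(range(len(num_values_per_parm)), 2):
        for v1, v2 in product(range(1, num_values_per_parm[p1] + 1),
                              range(1, num_values_per_parm[p2] + 1)):
            if not any(r[p1] == v1 and r[p2] == v2 for r in rows):
                return False
    return True

# A classic 4-row pairwise covering array for three 2-valued parameters
rows = [
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]
assert covers_all_pairs(rows, [2, 2, 2])
# Dropping any row leaves some pair uncovered
assert not covers_all_pairs(rows[:3], [2, 2, 2])
```

The same idea generalizes to t-way coverage by iterating over combinations of t parameters, which is what the recursive helpers above do when coverage_degree > 2.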
So I'm narrowing down board designs and wiring. Here are some of my observations so far.
Linear Voltage Regulator - This basically takes the 9V and reduces it to 5V. The excess power is given off as HEAT. This appears to be how most other PB boards operate, and the main reasons I suspect are price and space: LVRs are SMALL and simple to use. Why not use the LVR included on the Arduino Nano? The Nano has an LVR that can sustain 500mA, the F1 5way takes 200mA, and the sear tripper takes 1A (peak, not sustained). I believe we would overload it VERY quickly and either burn it up or overheat it until it shuts off. If battery life turns out to be a serious issue I will look into a switching voltage regulator, as the new ones from Maxim look nice.
This is also why I could never get the Universal T Board to work on my Racegun: I took a look at the schematics, and the LVR on it couldn't sustain anywhere near that much. I would be curious to know if anyone has gotten a Universal T Board to work in a Racegun; if so, it would tell me whether I could go to a smaller transistor, since the extra power handling isn't needed.
Linear Voltage Regulator
Part Number - LM2936MP-5.0
DataSheet - http://www.national.com/ds/LM/LM2936.pdf
Size - SOT223
NPN Transistors - These take the low-voltage signal from the microprocessor (Arduino) and act as a switch to turn on the higher-current solenoids. I chose transistors with integrated diodes for simplicity, rated to handle 1A sustained. This is overkill for the F1 5way and I may change it down the road for space savings. For now it will be the same so I don't have to organize so many different parts.
Part Number - BST50 by NXP SEMICONDUCTORS
DataSheet - http://www.nxp.com/documents/data_sheet/BST50_51_52.pdf
Size - SOT89
Connectors - Argh. If it were up to me I think I would just hard-solder the connectors, because I don't know how many people I have helped with microscopic bits of broken plastic. That being said, not everyone can solder, so I guess connectors are handy. Just be careful when removing those things! I decided to go with straight headers (the current race board is right-angled) due to my space needs, but if it doesn't work out, right-angle connectors might come back into play.
2 Pole Connectors (Noids, Batt)
4 Pole Connectors (Eyes)
6 Pole Connectors (Button Pad)
I placed an order from Digikey (try Mouser, they are GREAT as well) for the parts and I hope to get them, and the soldering station, in soon. Then the fun will start, since I've never soldered SMT components in my life, or custom-made my own PCB! Oh well, should be fun all the same!
Now that I'm using CAD software (NRO I'm sure can relate), it's making it a lot easier to make changes if needed. EAGLE CAD is awesome if you ever need to set something like this up, and there is even a free version so you can try it out.
LabVIEW Job Cost Overview
Typical total cost of oDesk LabVIEW projects based on completed and fixed-price jobs.
Average LabVIEW Freelancer Feedback Score
LabVIEW oDesk freelancers typically receive a client rating of 4.98.
8 years of experience in LabVIEW, LabVIEW RT, LabVIEW FPGA, MATLAB (Simulink, Simulink Control Design, Control System Toolbox, Curve Fitting Toolbox, Optimization Toolbox, Symbolic Math Toolbox, System Identification Toolbox). LabVIEW CLA titled. 7 years of product development experience with NI hardware (sb-RIO, c-RIO, DAQ, SCXI, PXI).
I am a software developer specializing in algorithm research and development. The programming environments I am most comfortable using are Python, MATLAB, and LabVIEW. I am a professional LabVIEW developer with 18 months of industrial experience. I am also a Python developer working mostly with scientific Python. My GitHub account: https://github.com/neotheicebird I appreciate work that challenges my reasoning ability and helps me explore new boundaries.
Over the last 7 years I have developed a wide range of automated tests in LabVIEW. I have very good logical and architectural skills. I also have experience in the maintenance and development of big LabVIEW programs and the generation of source distribution kits, installers and executable programs. I also have some basic knowledge of microcontrollers (ATMEL), assembly language and electronics. I have programmed a lot of different devices (generators, power supplies, oscilloscopes, temperature chambers, switch units, a few data acquisition devices and special circuits) using different interfaces (GPIB, RS-232, TCP/IP, USB). I am seeking opportunities to develop small or big LabVIEW programs (and also hardware devices needed for automation processes). As of February 2014 I am CLD certified.
Certified LabVIEW Architect with 12+ years of experience in developing custom test automation services using LabVIEW. Well versed with NI products as well as various meters/sources/equipment from Agilent, NHR, California Instruments, etc.
Background: Electrical engineer with experience in all aspects of product development from concept through production, including hardware design, software development, & system integration. Senior development & test engineer for major defense & telecom companies. Specialties: • Custom software and hardware for measurements and test automation • Software: Labview, MS Excel w/ VBA, C (libraries and console applications only) • Manufacturing process troubleshooting & root cause analysis
Web developer with 5+ years of professional experience, successfully freelancing for over two years. I have worked mostly with Django (Python), WordPress, and Magento (PHP) platforms for the web. Besides backend, which is my primary focus, I'm also adept at frontend tasks. When it comes to desktop apps, C# is my language of choice for Windows and Java for cross-platform apps. I like LabVIEW a lot, but unfortunately its applicability is often limited. In all of my projects I put code quality very high on my list of priorities, as well as following standards and best practices. "Readability counts"
Thanks for taking an interest in me, let me help you get to know me better! I am an avid traveler who loves taking on new challenges and trying new things. Throughout my life I have dived into a variety of different working environments that have allowed me to experience all sorts of perspectives on the world. Excellence in the completion of my tasks is a lifestyle, and I am constantly aware of the balance between quality and brevity. As an engineer, I worked on many different projects during my tenure as a technical professional at Halliburton Sperry Drilling. In this role my tasks varied from hardware design of data collection systems from the ground up, to system and UI programming for test fixtures, to maintenance and upgrades of existing hardware and software systems. I have experience in PCB design of intrinsically safe systems as well as higher-level system design. Microcontrollers are a particular passion of mine; I have worked with Arduino and Raspberry Pi, and even constructed a custom board using an ATmega for a wireless data collection system. My programming background covers several different languages such as C, C++, Java, Python, HTML, CSS, PHP, MySQL, and LabVIEW, and with a strong grasp of the logic behind the code I am very comfortable with any language that I may not have used as extensively in the past. User interface design was a large part of my role at Halliburton, and I am experienced in making an interface that is both intuitive and functional for users. As a writer, I have always strived to convey relevant information in both an efficient and eloquent manner. My travels have helped me discover new methods of evoking the reader's emotions, and I always strive to improve my writing skills! I am more than able to write technical documents, creative articles, how-tos, and anything in between! I hope I have given you a fantastic impression of my skills, and if you have any questions for me, don't be afraid to ask!
I am a hard working friendly guy that wants to give you the best that I can create for any project. Thanks for your consideration and I look forward to assisting you with all of your needs!
I am an applied mathematics researcher seeking part-time freelance programming work. My interests include: Machine learning Linguistics. Particularly natural language and syntactic analysis Financial data analysis Any interesting application of mathematical techniques in a practical setting. My background in mathematics makes me well suited for any task which requires learning and applying complex ideas and techniques; my years of programming experience give me a constantly expanding toolkit and the ability to quickly learn new tools as the need arises; and my experience in an academic environment prepares me to effectively communicate difficult concepts to individuals with any level of experience in the field. If my skills and attitude make me a good fit for your project but I do not have some prerequisite skill listed on my profile, feel free to contact me anyway. I am always willing to pick up new skills and can often do so quickly enough that you will not notice a learning curve.
I was wondering if it was possible to edit the VBS so that it closed after a short timeout (as it does now) if there were no errors, but either wrote an error log to a file on the server or stayed open with a message if one or all of the checks failed.
I have very little idea of what to change, as I have only just started looking into VB scripting, but I'm sure that some of the clever people on here would be able to knock it up in a few minutes.
Just thought it may help for when we get a message saying little Johnny couldn't access his work/shared area but no-one actually knows which bit or why.
'check for unplugged network cord during logon
'checks for mapped drives
'merge registry key to run
'| Windows Registry Editor Version 5.00
'| [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
'copy this vbs file to c:\windows directory
Option Explicit ' Added By SM
Dim WshShell, WshNetwork, ObjFSO, StrUserName, intReturn
Set WshShell = CreateObject("WScript.Shell")
Set WshNetwork = CreateObject("WScript.Network")
Set ObjFSO = CreateObject("Scripting.FileSystemObject")
' Wait until the user is really logged in... Added by SM
StrUserName = ""
While StrUserName = ""
    WScript.Sleep 100 ' 1/10 th of a second
    StrUserName = WshNetwork.UserName ' Get the user name
Wend
If StrUserName = "Administrator" Then
    'Ignore the admins ;-) Added By SM
    WScript.Quit
End If
'Check if the home drive is there... Added By SM
If ObjFSO.DriveExists("W:") Then
    'Yes it is, bail out... Added By SM
    WScript.Quit
Else
    'No it's not, has the network cable been pulled... Added By SM
    intReturn = WshShell.Popup("Please ensure the network Cable is plugged in or the Wireless Button is on.", 8, "Login Error", 0)
    If intReturn = 1 Then ' Trap the button click... Added By SM
        'Wscript.Echo "You clicked the ok button. This would log you off"
    Else ' The popup timed out, log the user off... Added By SM
        'Wscript.Echo "The popup timed out. This would log you off after a timeout"
    End If
End If
I had this issue a while back (the network plug being unplugged etc); kids were doing it after login to stop us remoting in. I found that the .NET Framework has an event which is triggered when the network availability has changed:
System.Net.NetworkInformation.NetworkChange.NetworkAvailabilityChanged += new System.Net.NetworkInformation.NetworkAvailabilityChangedEventHandler(NetworkChange_NetworkAvailabilityChanged);

void NetworkChange_NetworkAvailabilityChanged(object sender, System.Net.NetworkInformation.NetworkAvailabilityEventArgs e)
{
    if (e.IsAvailable)
    {
        // We're back on...
    }
    else
    {
        // We've been unplugged... logout...
    }
}
I'll post a full C# example later which will sit in the background and watch if wanted?
Whether you like Apple's products, and whatever your opinion of Apple CEO Steve Jobs, one thing is indisputable: Apple's knack for marketing consumer products is something most companies would kill to obtain. Think about it for a second: What other consumer (or, for that matter, business) product was launched with such fawning media coverage as Apple's new iPhone?
With all the hype surrounding the iPhone, I thought I'd stick my oar in the water and talk about what the iPhone means for Exchange Server administrators.
Over the last couple of years, I've chronicled my experiences with various Windows Mobile devices, including the Palm Treo 700w. Despite its flaws (such as not having nearly enough RAM), I've come to depend on the Treo to help me stay organized and in touch when I'm traveling or otherwise out of the office. Judging by the number of Windows Mobile devices I see at airports, hotels, and other places where business travelers congregate, I'm not alone.
Now, along comes the iPhone, an incredibly attractive device with a beautiful, fluid, smooth UI that makes Windows Mobile, Palm OS, and Symbian OS look clunky and antiquated by comparison. It has a ton of nifty consumer-level features; it includes full iPod functionality, a very capable Web browser, and full WiFi connectivity. On the other hand, it's missing some key features that business users have come to expect and demand:
- The iPhone doesn't have a physical keyboard. This is a deal-killer for many BlackBerry and Windows Mobile users. Apple's keyboard software is supposed to do a good job of making the onscreen keyboard usable, but I haven't used it enough to form my own opinion.
- The iPhone lacks several data types that are broadly supported on other devices. For example, the device has no task functionality, and you can't export notes from the iPhone to your desktop computer.
- There's no supported way for third-party developers to write applications that run natively on the iPhone, although Web-based applications work.
You might already have heard that the iPhone doesn't natively support over-the-air synchronization with an Exchange server. It supports IMAP; there's an "Exchange" account type on the iPhone that, when selected during setup, tells you that IMAP must be enabled on your Exchange server. The Safari Web browser properly checks certificates, but the Mail application doesn't. This circumstance eliminates one major annoyance of deploying Windows Mobile devices: the hassle of getting self-signed certificates onto a new phone. Of course, it also means that the Mail application can be fooled by a man-in-the-middle Secure Sockets Layer (SSL) attack, but that's a topic for another column.
Rumors have been circulating for a while that Apple has licensed the Exchange ActiveSync (EAS) protocol from Microsoft. I spoke with members of Microsoft's PR team, and they pointed out two salient facts. First, Microsoft doesn't comment on rumors (big surprise there). Second, Microsoft has licensed EAS to a number of other device vendors that compete with Windows Mobile. Because third parties can't add their own software to the device, any EAS support will have to come from Apple, and they're notoriously tight-lipped about product plans. For now, you're stuck using IMAP. It will be interesting to see what Apple's software upgrade path looks like for the iPhone, and how it compares to the way software updates work in the Windows Mobile, Palm OS, and Symbian worlds.
|
OPCFW_CODE
|
schema: change multi origin to origins (should we)?
the current syntax:
<ts:attribute-type id="venue" syntax="<IP_ADDRESS>.4.1.14<IP_ADDRESS>.15">
<ts:name xml:lang="en">Venue</ts:name>
<ts:name xml:lang="zh">场馆</ts:name>
<ts:name xml:lang="es">Lugar</ts:name>
<ts:name xml:lang="ru">место встречи</ts:name>
<ts:origin bitmask="0000000000000000000000000000000000FF0000000000000000000000000000" as="mapping">
<ts:mapping>
<ts:option key="1">
<ts:value xml:lang="ru">Стадион Калининград</ts:value>
<ts:value xml:lang="en">Craig Wright's House</ts:value>
<ts:value xml:lang="zh">加里宁格勒体育场</ts:value>
<ts:value xml:lang="es">Estadio de Kaliningrado</ts:value>
</ts:option>
</ts:mapping>
</ts:origin>
<ts:origin contract="erc875" as="mapping">
<ts:function name="isExpired">
<ts:inputs>
<ts:uint256 ref="tokenID"/>
</ts:inputs>
</ts:function>
<ts:mapping>
<ts:option key="1">
<ts:value xml:lang="ru">Стадион Калининград</ts:value>
<ts:value xml:lang="en">Craig Wright's House</ts:value>
<ts:value xml:lang="zh">加里宁格勒体育场</ts:value>
<ts:value xml:lang="es">Estadio de Kaliningrado</ts:value>
</ts:option>
</ts:mapping>
</ts:origin>
</ts:attribute-type>
the new syntax:
<ts:attribute-type id="venue" syntax="<IP_ADDRESS>.4.1.14<IP_ADDRESS>.15">
<ts:name xml:lang="en">Venue</ts:name>
<ts:name xml:lang="zh">场馆</ts:name>
<ts:name xml:lang="es">Lugar</ts:name>
<ts:name xml:lang="ru">место встречи</ts:name>
<ts:origins>
<ts:token-id bitmask="0000000000000000000000000000000000FF0000000000000000000000000000" as="mapping">
</ts:token-id>
<ts:ethereum contract="erc875" as="mapping">
<ts:function name="isExpired">
<ts:inputs>
<ts:uint256 ref="tokenID"/>
</ts:inputs>
</ts:function>
</ts:ethereum>
</ts:origins>
<ts:mapping>
<ts:option key="1">
<ts:value xml:lang="ru">Стадион Калининград</ts:value>
<ts:value xml:lang="en">Craig Wright's House</ts:value>
<ts:value xml:lang="zh">加里宁格勒体育场</ts:value>
<ts:value xml:lang="es">Estadio de Kaliningrado</ts:value>
</ts:option>
</ts:mapping>
</ts:attribute-type>
Rationale for such a change:
It seems reasonable for mapping to be per-attribute-type instead of per-origin. Each origin can, however, specify whether or not to use the mapping. Should the need to use more than one mapping for an attribute arise, each mapping can be given an id and referenced separately.
We haven't defined the syntax for attestation origins or dependency origins (one token depends on another token, and its attribute is sourced from that token). Their complexity might demand their own element tags. Using one element tag per origin is a clear way to start.
Rationale against such a change:
Couldn't think of any yet.
Consider this together with #114
The new syntax with <origins> sounds much better.
A. Have you considered how children in <origins> are to be processed? Are they ordered by precedence, or is it going to be more sophisticated? <origins> makes it easier to expand beyond precedence by ordering, similar to rationale (2).
B. Stronger "typing". There's less room for error in parsing the XML as we make changes to the schema.
C. I feel like semantically <mapping> should still be part of each <origins> child as they are implementation details of each origin, i.e:
<ts:origins>
<ts:token-id>
<ts:mapping/>
</ts:token-id>
<ts:ethereum contract="erc875">
<ts:mapping/>
</ts:ethereum>
</ts:origins>
But this will force developers to duplicate mappings and introduce more room for error. So leaving them as direct children of <attribute-type> is still better.
A. Have you considered how children in <origins> are to be processed? Are they ordered by precedence, or is it going to be more sophisticated? <origins> makes it easier to expand beyond precedence by ordering, similar to rationale (2).
Thanks. Naïvely, I think that for now it should be ordered by precedence, but it will grow in complexity.
Ixia enhanced its IxOS common test platform, the underlying operating system, to enable users to run the chassis management interface on an IPv6 or dual-stack
The Communications Regulatory Authority (CRA) has collaborated with Carnegie Mellon University in Qatar (CMU-Q) and Qatar University (QU) to issue a handbook of guidelines and basics to implement an IPv6/IPv4 dual-stack
. This is part of CRA's initiative to support the IPv6 Taskforce in Qatar towards achieving a complete transition to IPv6, the CRA said in a statement yesterday.
The generic definition of a 6PE is a dual-stack
IPv4- and IPv6-enabled router, with at least a legitimate and routed IPv4 address in the MPLS cloud, identified as a Forwarding Equivalence Class (FEC) with a correspondingly allocated and distributed label binding to the rest of the network. 6PE is typically deployed by ISPs that have an MPLS core network and (possibly) support MPLS VPN (or other) services.
NTT Com's native dual-stack
IPv6 network with 100G connectivity, programmability & automation will serveas a strong foundation for delivering new and enhanced services to enable the Internet of Everything.
Akamai says that 35 federal agencies that operate 1,200 individual Websites are using its dual-stack
IPv6 and IPv4 platform to meet the IPv6 mandate.
There are three different methods used to achieve this goal across broadband access networks including tunneling (dual-stack
lite and IPv6 rapid deployment) and dual-stack
The most widely supported form of translation is the use of a dual-stack
application-layer proxy, e.g.
Adding this LTE functionality extends the capabilities of the tester's Smart Studio graphical network simulation environment to include Call Session Control Function (CSCF), Dynamic Host Configuration Protocol (DHCP), and Domain Name System (DNS) servers, using either IPv4, IPv6, or dual-stack
Around the globe, Internode (Australia) supports IPv6 services with a dual-stack
network, and ISPs in Hong Kong as well as KDDI, XS4ALL, and Free Telecom are all offering IPv6 connectivity."
Participants will have the opportunity to work on lab exercises related to deploying IPv6 in dual-stack
operational IXP/ISP environments as well as native IPv6 transit.
Tools for RIM based software development
Tools for RIM based software development are software toolkits that can be used by software developers to support/implement HL7's RIM-based standards.
- Most tools within HL7 cater to 'standards creators', i.e. tools that support the standards development process. See e.g. HL7 Tooling FAQs and HL7.org toolkit page.
- The term "implementation" is commonly/historically used in HL7 to identify "the process of creating implementation guides" (the definition of a contextualized profile for a universal HL7 standard), whereas software developers would understand "implementation" to mean "software implementation / software development". For that reason this page avoids the term altogether.
The list below contains HL7-sponsored or HL7-developed tools, as well as open source, public domain, or commercial tools. By default all listed tools are open source and public domain. Commercial tools are (and: SHALL be) explicitly identified as such.
Tool evaluation process
Aims of the process:
- Make it easier to implement HL7's RIM-based standards by
- evaluating candidate tools or toolkits, and publishing the results. The evaluation process will be open and based on objective criteria (see below for a list). Note: no endorsement, no certification, but evaluation.
- making these evaluations known to software developers (see Tooling Communication Plan), thereby allowing them to select any tools that may fit their particular context and programming platform. In conjunction with these evaluation results, (a) information should be made available that is minimally sufficient to enable an implementer to understand the main characteristics and goals of the tool, and (b) it should be identified where additional information can be found and where the tool can be downloaded.
The evaluation process itself consists of the following:
- all tools can be nominated by any party to be evaluated
- as part of the evaluation it is determined what tool category (as identified below) the tool belongs to - each tool category may have category specific evaluation criteria (e.g. code generators may have certain desirable characteristics, whereas testing tools have different characteristics)
- a tool may not fit in any existing tool category because it's either 'out of scope' for this evaluation process, or because a new tool category needs to be added to the list.
- new evaluation criteria (whether general or tool category specific), and new tool categories, may be proposed by any party
- the evaluation process itself will be the responsibility of N? RIMBAA?/Tooling? members elected by the WG. Those responsible may solicit help from those more knowledgeable about certain tools during the evaluation process.
- all tools are re-evaluated after 1? year, or after a major release of the tool.
Implementation tools that RIMBAA has identified include the ones below, sorted by the type of audience/task they support:
Tools that directly support the software development process:
|Lvl||Tool Category||Description||Examples (todo: links!)||v3 msg||v3 CDA||RIMBAA||FHIR|
|1||MIF Parser||MIF consumption, API to use MIF (all MIF packages)||x||x||x|
|1||HL7 processable artefact definition based code generator||Includes serialization (which is independent of ITS)|
|2||MIF based class/code generators||Includes serialization (which is independent of ITS)||MDHT, Everest, JavaSIG||x||x||x|
|2||XSD based class/code generators||Includes serialization (which is independent of ITS)||JaxB||x||x||x||x|
|1||MIF based UI generators||Constrained information model to UI element||x||x|
|1||Datatypes library||Classes, operations, may include serialization|
|2||ISO Datatypes library||Classes, operations, may include serialization||Everest||x||x||x|
|2||FHIR Datatypes library||Classes, operations, may include serialization||x||x|
|1||CTS products||Apelon, LexGRID, Healthlanguagecommercial||x||x||x||x|
|1||Mapping tools||Open Mapping Software||x||x||x||x|
|1||RIM Based Persistence Layer||inclusive of OO-API, as a base platform for application development. Subtype: ORM layer to abstract the data types (e.g. an enhanced Hibernate)||JavaSIG, MGRIDcommercial, Oracle HTBcommercial||x||x|
|1||MIF to model transformation||to any other model, includes a MIF Parser||x||x||x||x|
|2||MIF to UML transformation||x||x||x||?|
|2||MIF based schema and schematron generator||SchemaGenerator||x||x|
|2||MIF based database schema generator||for RIM, datatypes (and versions thereof), or R-MIMs||MGRIDcommercial?||x||x|
Tools that support software developers:
|Set of testcases/examples||to test ones code. Or test framework such as IHE Gazelle.|
|Model Based Testing tools||Instance Editor, MDHT|
|An example of a software implementation||testing, playing around|
|MIF visualization tool||show condensed view|
Tools that are used by analysts/providers:
|Model Driven application generation tools||(static/dynamic/UIs/..)||PHI Technology|
|Model Driven documentation generators||PHI Technology|
|MIF based UI designers||binding UI wireframe to model elements|
Tool Evaluation Criteria
The following list of general criteria (independent of to which tool category the tool belongs to) should aid HL7/RIMBAA in evaluating tools:
|Offers support||Up to date documentation, Availability of skilled resources and services|
|Continued development||Follow developments within HL7 (with a reasonable time lag)|
|Proven usage||Used at least 1, or x, sites|
|Reusable||open architecture, developed for re-use as a component in a different "stack", not be tied into one particular solution stack|
|Unambiguous license||No prohibitive licensing small-print, For open products: use should not effectively require the purchase of non-open parts. No hidden "Widget-frosting", should be known ahead of time.|
|No cost barrier||Low cost preferred over high cost|
Spatiotemporal Relationship Aided Adaptive Collaboration for Resource Constrained Swarms
Lecture & Organization
Department of ECE, Carnegie Mellon University, USA
Date & Time
Sat 6/6/2020 11:00-12:00am
Zoom ID: 574 459 9066
Dr. Xinlei Chen is a postdoctoral research associate in the Department of Electrical and Computer Engineering at Carnegie Mellon University. He received his Bachelor's and Master's degrees in Electronic Engineering from Tsinghua University and his Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University. His research interests include large-scale cyber-physical systems, Artificial Intelligence of Things (AIoT), swarm intelligence, and ubiquitous computing. He has published in both top-tier ACM conferences (e.g., SenSys, UbiComp) and high-impact ACM/IEEE journals (e.g., IEEE TMC, IEEE JSAC). He has received awards including the Best Demo Award at ACM SenSys 2016, the Best Poster Runner-Up Award at ACM SenSys 2016, and the Best Poster Award at IEEE/ACM IPSN 2017. His research and demo systems have been covered by well-known media including NBC News and PC World.
Swarm intelligence, as a game-changing technology, has been regarded as an essential component of artificial intelligence, with broad prospects for both military and civilian applications. For example, it can be applied to unmanned micro aerial vehicles (MAVs) to perform 4D (dull, dirty, dangerous and deep) tasks, such as investigation, detection, and projection. In addition, swarm intelligence can also be adopted for urban surveillance, radiation monitoring, search & rescue, etc. Due to constraints from real application scenarios, all or most nodes in the swarm have access to limited resources in terms of equipped hardware and acquired data. Therefore, it is challenging for the swarm to achieve satisfying systematic performance with limited resources in real application scenarios, which are hostile, dynamic and complicated. My research focuses on optimizing systematic performance of status estimation, environment inference and task planning for resource-constrained swarms. The spatiotemporal relationships among individual nodes and the acquired data are extracted to improve the entire swarm's performance, which is also aided by adaptive actuation strategies based on sensing and learning results. Supported by funding from NSF, DARPA and companies (Intel, Nokia, etc.), my work has been implemented and evaluated on a wide range of intelligent swarm systems: 1) indoor navigation and deployment with MAV swarms; 2) city-scale fine-grained air pollution inference with vehicular sensing platforms; 3) city-scale route actuation on a ride-sharing vehicular mobile crowdsensing platform.
What is PDFBox?
The PDFBox library comes as a JAR file. It allows the creation of new PDF documents, manipulation of existing documents, bookmarking PDFs, and the extraction of content from PDF documents. We can also use it to digitally sign, print, and validate files against the PDF/A-1b standard.
Is PDFBox free for commercial use?
Yes. Apache PDFBox is released under the Apache License 2.0, a permissive license that allows you to use, modify, and redistribute the library in commercial products, provided the license terms (such as retaining the required notices) are followed.
How do I create a PDF with PDFBox?
PDFBox – Creating a PDF Document
- Step 1: Creating an Empty Document. Instantiate the PDDocument class, which belongs to the package org.apache.pdfbox.pdmodel.
- Step 2: Saving the Document.
- Step 3: Closing the Document.
What is the use of PDFBox?
The Apache PDFBox® library is an open source Java tool for working with PDF documents. This project allows creation of new PDF documents, manipulation of existing documents and the ability to extract content from documents. Apache PDFBox also includes several command-line utilities.
Is PDFBox open source?
Apache PDFBox is an open source pure-Java library that can be used to create, render, print, split, merge, alter, verify and extract text and meta-data of PDF files.
How does Apache PDFBox work?
How do I open a PDF with PDFBox?
Read All Text from PDF Document using PDFBox 2.0
- Step 1: Load PDF. Load the PDF file into a PDDocument: PDDocument doc = PDDocument.load(new File("sample.pdf"));
- Step 2: Use the PDFTextStripper.getText method. Get the text from the document: String text = new PDFTextStripper().getText(doc);
How do I add a picture to a PDF on PDFBox?
PDFBox – Inserting Image
- Step 1: Loading an Existing PDF Document.
- Step 2: Retrieving a Page.
- Step 3: Creating PDImageXObject object.
- Step 4: Preparing the Content Stream.
- Step 5: Drawing the Image in the PDF Document.
- Step 6: Closing the PDPageContentStream.
- Step 7: Saving the Document.
- Step 8: Closing the Document.
Is Apache PDFBox safe?
Is PDFBox thread safe? No! Only one thread may access a single document at a time.
Can iText 2.1/7 or earlier be used commercially?
We do not recommend using versions prior to 5.1 for commercial projects, as your company could be liable for copyright or IP infringement. Of course, this appears to be a warning only: discouraging the use of earlier iText versions for technical reasons would be understandable, but the legal risk is simply not worth taking.
How to create a PDF document using PDFBox?
This small sample shows how to create a new PDF document using PDFBox and print the text "Hello World" using one of the PDF base fonts.
What do you need to know about PDFBox 5?
The Portable Document Format (PDF) is a file format that presents data in a manner independent of application software, hardware, and operating systems. Each PDF file holds a description of a fixed-layout flat document, including the text, fonts, graphics, and other information needed to display it.
What can you do with Apache PDFBox library?
Apache PDFBox is an open source pure-Java library that can be used to create, render, print, split, merge, alter, verify and extract text and meta-data of PDF files.
How many lines of code are in Apache PDFBox?
Open Hub reports over 11,000 commits (since the start as an Apache project) by 18 contributors, representing more than 140,000 lines of code.
A comment from williams on a prior post intrigues me. S/he asserts:
My opinion is that reviewers should assess the scientific merit of a grant in itself (impact/significance/ PI’s scientific potential). Period. If an investigator has had an interval of no support that is irrelevant to the science she/he is proposing to do at this time in point.
In theory, I would agree. In theory.
The trouble is that this sort of assumes "all else equal". And I just don't see anything other than vanishingly rare circumstances meeting this standard.
Now to sidetrack for a little bit, the discussion was mired down by the term "failed grant" which to me means an interval of funding that has resulted in some less-than-productive outcome. arzey had said
You can almost always learn more about how to do an experiment from a failed experiment than from a successful one. Or to paraphrase: all happy grants/PI’s are alike, every unhappy grant/PI has lots to teach us.
The commenter williams, however, seems to be obsessed with failed grant applications, i.e., those which do not win funding. A different matter, in my view and somewhat less interesting. Of course we should learn from our applications that fell short. With the comment I started with, we perhaps reach a happy middle ground.
The Investigator component of grant review looms large. Despite the fact that formally speaking the NIH award is to a University and that PIs and other staff could be swapped around willy-nilly in theory, the participating investigators are a huge factor. A large part of the assessment of Investigators is the publication record of the participating staff. Not just mere publications, either, but oftentimes an assessment of how publications fit in with the prior and ongoing research support. If you have relatively little research support, well then reviewers are going to extend more latitude for a publication record that is less than expected. If you have a huge amount of funding, these expectations can rise.
Now let us acknowledge that williams is correct that grant review is supposed to focus on the qualities of the current plan. A good grant score on a new application (i.e., not a continuation of an existing project) is not supposed to be an award for doing well in a prior interval of funding. It is supposed to be an informed prediction that the current proposal will be successful if funded. Likewise, an excellent grant application should not be penalized for apparent deficits in the past work of the participating investigators under a different award.
Herein lies the trouble. Most people assume that past performance of an investigator is a good predictor of future performance, leaving the specifics of a given scientific plan aside. So there is an overwhelming tendency to view a wildly successful prior interval of funding (aka, "track record") as a good predictor that the current application will result in similar success.....just because of the PI. Similarly, there is a tendency to view a lack of success in a prior interval as a prediction that the current application will not go well.
Favorably disposed reviewers will be looking for some reason to excuse what looks like dismal past performance, I hasten to reassure you. In fact, I have recommended before that if there is some apparent deficit in your own track record, you do what you can to provide an explanation in the application. Subtly. The advocate reviewer needs something to work with. I have personally seen all sorts of explanations, not excuses, explanations in grants. They can go down pretty well. Everything from trailing spouse issues to health conditions to child bearing to local weather disaster to local institutional screwage. If the reviewers like your proposal, they are looking to come up with reasons why your past interval of suboptimal performance does not hold as a predictor of your future performance leading the project.
All this does, however, is confirm the validity of the more general assumption that past performance predicts future performance. Right?
So how would we view an interval without funding? Here we are talking about a mid- to late-career investigator who has had prior support, say, and then has gone for some serious interval (say 2+ years) without a grant. Should the Investigator criterion take a hit? Is this any different from going too long without any scientific publication? Does it, at some level, say something about an investigator if s/he is unable to keep the lab funded*?
I think it has to be a consideration. It is as much a part of the assessment of Investigators as any other traits and credits, in my view.
* to be clear I am talking relative to expectation. There are going to be some job categories for which continual funding is not an expectation. And some for which it is something more than an expectation. Let us not get distracted yelling about subfield differences and job types.
Simon Runc Edit (04/04/2018)
Since first writing this post, mapping has got much easier/better in Tableau; with Shape Files loading directly into Tableau (so no need to manually create Polygon files) and many more geographies, natively, supported. It's set to become even easier with Spatial Joins, and Spatial/Geometry Data Type (in Database) support coming soon. I also wanted to point everyone towards this amazing resource from Craig Bloodworth ...making things even easier!! (big thank you to Craig)
Simon Runc Edit (15/05/2015)
Since posting this article I have learned a lot more about custom shapes in Tableau and a bit more (I stress the ‘bit’ here!) about QGIS. This was mainly driven by the questions/issues posted below, where, after following the original article, your maps end up looking like….
As such I have now created a further attachment (.doc), with the help of the below posts from Jon Foote and Shine Pulikathara, compiling their methods for cleaning up shape files (many thanks for your input): using QGIS, manually amending the polygon data file, or via Tableau (the attached .twbx [Tableau 9.0] corresponds to the Tableau technique in the .doc file).
Simon Runc Edit (15/11/2015)
One of the major reasons the above occurs is where there are islands. The problem arises with the Path values we create: if a 'geography' encompasses several islands, our path goes (1st island) 1,2,3...100, (2nd island) 101,102..., and Tableau will plot 100 -> 101 across the sea! To get around this we need to create 'sub-polygons': plotting levels so that the Path values restart for each island. alexander.turner.0 has written some VBA script (and kindly shared it) to do exactly this...and it can be found here
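If you prefer scripting to VBA, the sub-polygon idea can be sketched in a few lines of Python. This is a hypothetical illustration (not the shared VBA script): it uses a crude heuristic, starting a new sub-polygon whenever consecutive points jump further apart than a threshold, on the assumption that a big jump means we have crossed the sea to another island.

```python
def sub_polygons(points, max_gap=0.05):
    """Tag each (lon, lat) point with a sub-polygon id, starting a new
    id whenever consecutive points jump further apart than max_gap
    degrees -- a crude island detector; real data may need tuning."""
    result, sub, prev = [], 0, None
    for lon, lat in points:
        if prev is not None:
            if ((lon - prev[0]) ** 2 + (lat - prev[1]) ** 2) ** 0.5 > max_gap:
                sub += 1  # big jump: assume we've moved to a new island
        result.append((lon, lat, sub))
        prev = (lon, lat)
    return result

# Two tight clusters of points separated by a large jump -> two sub-polygons
tagged = sub_polygons([(0.0, 0.0), (0.01, 0.0), (1.0, 1.0), (1.01, 1.0)])
```

You would then compute the Path/Sequence value per (geography, sub-polygon) pair and place the sub-polygon id on the Detail shelf alongside the geography, so Tableau draws each island as its own closed shape.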
In addition to these techniques, I have also found some other resources where (free) TDE polygon files are available (which is by far and away the easiest way, assuming you can find them for your required geography)....
Miso Maps (for the UK, with beautiful, clean polygon TDEs at all UK levels from Zones to Local Output Area)
Tableau Mapping BI (lots of TDEs of Maps around the world)
For UK Geographies...don't mess about with QGIS/Shapefile!!, just use this amazing piece of work from Craig Bloodworth Just (unzip and) copy the files onto your 'My Tableau Repository', and Tableau's 'built-in' geographic splits become all the UK levels you could ever want (all as point and area)
However, if you can't find what you are looking for (free and pre-built) and can only get Shapefiles of your required geography, the original post below (along with the techniques in the .doc) should get you what you want.
I will point out that I’m no GIS expert, in fact I only use QGIS for this purpose. As such I've provided below a ‘do this, do this, do this…’ explanation that shouldn't matter if you understand what the GIS is doing or not!
Although Tableau has some built-in geographic data for the UK [such as postcodes, cities, counties, etc.], it doesn't contain administrative boundaries. There are also some countries which don't have state boundaries defined.
However Tableau allows us to create our own custom boundaries, known as Polygons. Polygons are created from a file that contains the points [in Longitude and Latitude] that make up the ‘point-extent’ of each boundary, and the order in which to draw them
Once the file is created it is fairly straight forward to create custom polygons in Tableau
…but first the complicated part! How do we create the file?
Get the ShapeFile
In my example I’m going to create Ward (Administrative) boundaries for a Council in the UK.
Office of National Statistics [ONS] provides many ‘geographic’ layer files (free of charge) for use in many disciplines. Here is the Link to ONS download section of boundary ShapeFiles: https://geoportal.statistics.gov.uk/geoportal/catalog/content/filelist.page
We need the point polygon co-ordinates from the Shape File, which means we need to process this file through a GIS application. I will use QGIS (which is Open Source…aka free!). Here is the download page https://www.qgis.org/en/site/forusers/download.html
For this example, I've download the file called ‘Wards_(GB)_2013_Boundaries_(Full_Extent).zip’ from the ONS Geoportal [this is in the Boundaries section], and extract the files
Open ShapeFile in QGIS
Open QGIS, Click on ‘Add Vector Layer’ and browse to SHP file [it will be the biggest one], and Open. This will bring up all the Ward Boundaries for the UK
First task: if the co-ordinates in this file aren't in longitude and latitude, we need to convert them. As I don't need the whole of the UK, I can select the wards I need using the selector [hold CTRL to select multiple wards].
Tip: Use the Label to add Ward names to map [Ward name field is called ‘WD13NM’], unless you are geeky enough to know the ONS ward codes!
Convert Projection to Longitude/Latitude
Right-click on the layer name in the layer list, select ‘Save As...’, and set the target CRS to WGS 84 [EPSG:4326]. This is the projection for longitude and latitude coordinates. Set up the ‘Save As...’ dialogue as per the screenshot below.
Key Settings are;
File type ‘ESRI Shapefile’; selected CRS is WGS 84 [EPSG:4326]
I've renamed the files with an ‘LL’ suffix, but obviously this is up to you
I've also ticked ‘Save only selected features’ to keep the file size down [unless you need the entire UK]
We can then add this new layer to map [using the ‘Add Vector Layer’ as before]
Extract Nodes from Shape File
Select the new file in the Layers pane
Go to Vector > Geometry Tools > Extract nodes
Save the Shape file to a new name. I've called it ‘Wards Polygons ShapeFile.shp’, selecting ‘Add Results to Canvas’
Ensure the New ‘WD_DEC_2013_GB_BFE_LL’ file is selected. As per screen shot below
Once it’s done, close the dialogue box
The new Geometry file will now appear in the Layers pane
Add Geometry Columns
Select the new file in the Layers pane
Go to Vector > Geometry Tools > Export/Add geometry columns (see screen shot below)
Save the Shape file to a new name. I've called it ‘Wards Polygons ShapeFile_Geometry.shp’ selecting ‘Add Results to Canvas’
Ensure the New ‘Wards Polygon ShapeFile’ file is selected
Once it’s done, close the dialogue box
The new Geometry with Columns file will now appear in the Layers pane
Create CSV of nodes
Right Click the new ‘Geometry with Columns’ layer [from Layers pane], and select ‘Save As…’, with the following settings
Format = Comma Separated Value
Choose Name for CSV
CRS = WGS 84
Layer options should default to the correct settings, but check against screen shot that you have Geometry AS_XY
Screen Shots below
This will give a CSV with Longitude/Latitude columns for the extent of each ward
Getting the file into a Tableau Format
Tableau needs a plot order for each polygon. This can be done in any ETL tool or in SQL… I've shown how to do it in Excel (as this seems the most used!)
Open the file in Excel and add a 'Sequence' Column. The Polygon Nodes are in the correct order so it’s just a question of a formula. In the first sequence cell I've added the formula =IF(E2<>E1,0,I1+1). As per the screen shot below, and copy this down for Column I
Rename X and Y fields to ‘Longitude’ and ‘Latitude’, and rename ‘WD13NM’ as ‘Ward’
You will then end up with a data file looking like this. By the way, you don't need to create a numeric ID and can just use the ward name (I'm so used to using numeric keys, which are more efficient when joining data at larger scales [star schemas, fact tables, etc.], that I can't help myself!)
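If you'd rather script the sequence column than build it in Excel, the formula =IF(E2<>E1,0,I1+1) translates directly: restart the counter at 0 whenever the polygon key changes. A minimal Python sketch (the column name "Ward" and the sample coordinates are just this example's; adjust for your data):

```python
def add_sequence(rows, key="Ward"):
    """Add a 'Sequence' column that restarts at 0 whenever the polygon
    key changes, mirroring the Excel formula =IF(E2<>E1,0,I1+1).
    Assumes rows are already in node order, as the QGIS export is."""
    out, prev, seq = [], None, -1
    for row in rows:
        seq = 0 if row[key] != prev else seq + 1
        prev = row[key]
        out.append({**row, "Sequence": seq})
    return out

rows = [{"Ward": "A", "Longitude": 0.1, "Latitude": 51.5},
        {"Ward": "A", "Longitude": 0.2, "Latitude": 51.6},
        {"Ward": "B", "Longitude": 0.3, "Latitude": 51.7}]
sequenced = add_sequence(rows)  # Sequence values: 0, 1, 0
```

Write the result back out as CSV and it loads into Tableau exactly like the Excel version.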
Plotting Custom Polygons in Tableau
Load in New File into Tableau
Convert ‘Sequence’ to Dimension
Add Longitude to Columns, Latitude to Rows
Ward Name and Sequence into detail Pane, and change Sequence to ‘Path’ [use drop down to left of Blue Pill]
I've then dragged Ward into the Colour Marks, for greater visibility
We can then ‘Blend’ on the Ward name, to other Data, to use that data within our new custom areas [NB. The Polygon file needs to be the Primary Data source]
And voilà: custom polygons from a ShapeFile in Tableau.
I hope people find this useful
Storing arbitrary "governance documents" in Plugin state
Each Plugin has a one-to-one relationship with a state model, which is just a datastore that can be used to store any data for the life of the plugin. (Life of the plugin = from when it's activated for the community, until it's deactivated or the community is deleted). I originally added this because I wanted to store data that was fetched on initialization so that it can be accessed later on. I was thinking of state as something that can be re-created on plugin init.
HOWEVER, I realized I could just use the state to support the web monetization revshare use-case. Here's how: https://github.com/metagov/metagov-prototype/blob/2eb8931212ed705fff3de382d1f8943d700530f4/metagov/metagov/plugins/webmonetization/models.py
Instead of the actions performing actions on an external platform, the actions are updating the state of the plugin itself. Instead of the resource functions getting data from an external platform, the resource functions are returning data from the plugin state.
I think this is a fine approach to use for small structured documents. I don't think we need to go beyond this right now. If we see it being repeatedly used in a common way, maybe we'll want to add an abstraction specifically for creating/updating/deleting structured documents.
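As a stripped-down illustration of the pattern (hypothetical class and method names; the real implementation is in metagov/plugins/webmonetization/models.py linked above), actions mutate the plugin's own state and resources read from it:

```python
class Plugin:
    """Minimal stand-in for Metagov's plugin state model: a key-value
    store that lives from plugin activation until deactivation."""
    def __init__(self):
        self.state = {}

class WebMonetization(Plugin):
    # "Actions" update the plugin's own state, not an external platform.
    def add_pointer(self, pointer, weight):
        self.state.setdefault("pointers", {})[pointer] = weight

    def remove_pointer(self, pointer):
        self.state.get("pointers", {}).pop(pointer, None)

    # "Resource" functions return data from plugin state,
    # not from an external platform.
    def get_revshare_config(self):
        return dict(self.state.get("pointers", {}))

wm = WebMonetization()
wm.add_pointer("$wallet.example/alice", 60)
wm.add_pointer("$wallet.example/bob", 40)
wm.remove_pointer("$wallet.example/bob")
config = wm.get_revshare_config()  # {'$wallet.example/alice': 60}
```

The "document" here is just whatever structured data the actions have accumulated in state, which is why the Driver's control over action invocation is also control over the document.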
Q&A about using plugin state to store "documents":
1. Can the Driver govern changes to the document?
Yes. In the revshare example, the document is changed by invoking a metagov action (e.g. POST /api/internal/action/webmonetization.remove-pointer). Only the Driver is able to invoke that request, so, naturally, the Driver can "govern" that action however it wants to.
2. Can the Driver govern access to the document?
Same as (1) –– the Driver is the only thing that can access the /resource endpoint, so yes. Unless...
3. Can we allow public access to the document?
This is really a separate issue. As mentioned on https://github.com/metagov/metagov-prototype/issues/15 we are considering making all /resource endpoints totally public, or allowing plugin authors to declare whether resources are public (anyone on internet) or private (driver only).
Changes to make:
Currently we destroy and re-create the plugin (and its state) whenever the plugin config changes (code). Instead of deleting and recreating plugin instances, we should have plugin authors implement apply_config or some pre-save hook that lets them choose what to do when the config changes. Likewise, we should just mark the plugin instance as inactive when a community deactivates it, instead of deleting the instance.
Excellent.
> I think this is a fine approach to use for small structured documents. I don't think we need to go beyond this right now. If we see it being repeatedly used in a common way, maybe we'll want to add an abstraction specifically for creating/updating/deleting structured documents.

Agreed.
Let's make sure this gets into the documentation at some point.
Connect with countless other learners to discuss ideas, explore course material, and get help mastering concepts.
Private assemblies must be uploaded in a bin folder in the function directory. Reference the assemblies using the file name, such as #r "MyAssembly.dll". For information on how to upload files to your function folder, see the section on package management.
In the age of the internet and social media, many of us feel lucky if we have time to go for a walk, let alone sit down to read a book. But it's undeniable that learning R in depth
One of the best ways to consolidate learning is to write it up and pass on the knowledge: telling the story of what you've learned will also help others.
However, if your program doesn't rely on dynamic features and you come from the static world (in particular, from a Java mindset), not catching such "errors" at compile time may be surprising.
Optional typing is the idea that a program can work even if you don't put an explicit type on a variable. Being a dynamic language, Groovy naturally implements that feature, for example when you declare a variable:
Any R code in the Execute R Script module will execute when you run the experiment by clicking the Run button. When execution has completed, a check mark will appear on the Execute R Script icon.
Optimal allocation across multiple routes
To upload a project.json file, use one of the methods described in the "How to update function app files" section of the Azure Functions developer reference topic.
This week covers how to simulate data in R, which serves as the basis for doing simulation studies. We also cover the profiler in R, which lets you collect detailed information on how your R functions are running and identify bottlenecks that can be addressed.
This section takes you through some basics of interacting with the R programming language in the Machine Learning Studio environment. The R language provides a powerful tool for building custom analytics and data manipulation modules within the Azure Machine Learning environment.
After reading and applying this guide, you'll be comfortable using and applying R to your specific statistical analyses or hypothesis tests. No prior knowledge of R or of programming is assumed, though you should have some experience with statistics.
'Learn R in a Day' provides the reader with key programming skills through an examples-oriented approach and is ideally suited to academics, scientists, mathematicians and engineers. The book assumes no prior knowledge of computer programming and progressively covers all the essential steps needed to become confident and proficient in using R within a day.
Although it is possible that your local computer lab already has R, it is most useful to do analyses on your own machine. In this case you will need to download the R software from the R Project and install it yourself. Using your favorite web browser, go to the R home page and then choose the Download from CRAN (Comprehensive R Archive Network) option. This will take you to a list of mirror sites around the world.
Code-Generation Techniques for Java
by Jack Herrington
Working in Java either means writing a little bit of complex code or writing a lot of gruntwork code. J2EE is a prime example; implementing the persistence for a single database table takes five classes and two interfaces using EJBs, and almost all of the classes are clerical work. We have to write them, but we don't have to do it by hand. Code-generation techniques can make building high-quality EJB code a breeze.
Will code generation revolutionize computing and change the way we develop forever? Yes, but it will take a while. Software engineering has always concentrated on increasing our level of abstraction. In the beginning, we hand-wrote machine code; then we created assemblers and macro assemblers. After that, we created Fortran and compiled our code into assembler. Then came structure programming, and after that, object-oriented programming. With each step, we have increased our level of abstraction and, thus, our ability to create higher quality applications with more functionality, more quickly.
What is Code Generation?
What is this panacea for developers called code generation? Code generation is the technique of writing and using programs that build application and system code. To understand code generation, you need to understand what goes in and what comes out. What goes in is the design for the code in a declarative form: "I need two tables named author with these fields." What comes out is one or more target files. It could be Java code, deployment descriptors, SQL, documentation, or any type of controlled output.
Figure 1 shows the basic form of today's code generators:
Figure 1. The process of code generation
The components can change slightly between the different models, but the song remains the same. The code generator reads in the design, then uses a set of templates to build output code that implements the design. The separation between code generation logic in the generator and output formatting in the templates is akin to the separation between business logic and user interfaces in web applications.
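The pipeline in Figure 1 is small enough to sketch directly: a declarative design goes in, templates handle output formatting, and target code comes out. This toy generator is illustrative only (the generators discussed below target Java/EJB; the design dictionary and template strings here are invented for the example):

```python
from string import Template

# Declarative design input: "I need a table named author with these fields."
design = {"table": "author", "fields": [("id", "int"), ("name", "str")]}

# Templates: output formatting lives here, separate from generation logic,
# just as user interfaces live apart from business logic in a web app.
CLASS_TMPL = Template("class ${cls}:\n${body}")
FIELD_TMPL = Template("    ${name}: ${type}")

def generate(design):
    """Read the design, apply the templates, emit the target file text."""
    body = "\n".join(FIELD_TMPL.substitute(name=n, type=t)
                     for n, t in design["fields"])
    return CLASS_TMPL.substitute(cls=design["table"].capitalize(), body=body)

target = generate(design)
```

Because the generator is active, changing the design dictionary and re-running regenerates the target code; fixing a bug in a template fixes it across every generated file.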
Code generators are not wizards. Wizards are passive generators. They write code once, and then it's up to you to maintain the code forever. Code generators are active. They continually maintain code over multiple generation cycles. As the designs change, the input to the generator changes, and new code is created to match the design. This is a key advantage — when have you been on a project where the requirements don't change?
What Are the Benefits?
Before we get into specific examples of code generators for Java, let's make sure we have the end goals firmly in mind. One way to approach this is to think about the qualities we want in an optimal generator.
- Quality: We want the output code to be at least as good as what we would have written by hand. Thankfully, the template-based approach of today's generators builds code that is easy to read and debug. Because of the active nature of the generator, bugs found in the output code can be fixed in the template. Code can then be re-generated to fix that bug across the board.
- Consistency: The code should use consistent class, method, and argument names. This is also an area where generators excel because, after all, this is a program writing your code.
- Productivity: It should be faster to generate the code than to write it by hand. This is the first benefit that most people think of when it comes to generation. Strangely, you may not achieve this on the first generation cycle. Thankfully, the real productivity value comes later, as you re-generate the code base to match changing requirements; at this point you will blow the hand-coding process out of the water in terms of productivity.
- Abstraction: We should be able to specify the design in an abstract form, free of implementation details. That way we can re-target the generator at a later date if we want to move to another technology platform.
Now that we understand the benefits we want, and how code generation techniques address them in general, we should understand what we expect to use code generation for in the Java context.
What We Expect the Generator to Handle
The output files of a generator are called the target files. There are several generation targets within the Java enterprise application stack. Figure 2 shows the stack:
Figure 2. J2EE generation targets
All four of these elements of the stack are potential generation targets, but some are more common than others. From the bottom to the top:
- Database: Given Java's object-persistence approaches to database work, there isn't much call for direct generation of SQL for database code or stored procedures. However, if this is your architecture, you can use the custom approaches listed below to generate the required code.
- Persistence: Database persistence code is the most common generation target in the Java environment. All of the generators I refer to in the sections that follow build persistence code. Why? It's generally redundant grunt code. Generated database-persistence code also is an excellent foundation for a solid application, because it is consistent and relatively bug-free.
- Business Logic and User Interfaces: Only MDA and custom generators build production business logic and user interfaces. The critical factor in generating this code is building on top of a stable, predictable platform, ideally a generated persistence layer.
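To make the persistence case concrete, here is a minimal, hypothetical sketch of a template-style generator. The class and method names (DaoGenerator, findBy*) are illustrative, not from any real tool; the point is that every entity gets uniformly named finder methods, which is where the consistency benefit comes from:

```java
import java.util.List;

// Hypothetical sketch: emit a persistence-layer class from an abstract
// entity description, giving consistent naming "for free".
public class DaoGenerator {
    public static String generate(String entity, List<String> fields) {
        StringBuilder sb = new StringBuilder();
        sb.append("public class ").append(entity).append("Dao {\n");
        for (String field : fields) {
            // one uniformly named finder per column
            sb.append("    public ").append(entity)
              .append(" findBy").append(capitalize(field))
              .append("(String ").append(field)
              .append(") { /* JDBC plumbing */ return null; }\n");
        }
        sb.append("}\n");
        return sb.toString();
    }

    private static String capitalize(String s) {
        return Character.toUpperCase(s.charAt(0)) + s.substring(1);
    }

    public static void main(String[] args) {
        System.out.print(generate("Customer", List.of("id", "email")));
    }
}
```

Re-running such a generator whenever the abstract description changes is what delivers the productivity win described above.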
It's obvious that code generation is powerful and can build useful code, but does it have drawbacks?
What to Look Out For
Code generation is not without pitfalls and detractors. One of the most common complaints is that code that was once generated is now being hand-modified and thus cannot be re-generated. One trick is never to check the generated source into the code base. This ensures that engineers will always be required to use the generator as part of the compilation process, which keeps the generator alive and keeps engineers from modifying the output code.
Another problem is that engineers who have been around since the early 1990s liken code generators to Computer-Aided Software Engineering (CASE) tools. The comparison is mistaken because code generators are developed bottom-up by engineers for engineers. CASE tools were developed as a top-down replacement for programming languages and for engineers.
There are more reasons that engineers are skeptical about generation. Some issues are technical and others are cultural; sometimes it comes down to simple job preservation. These tend to be situation-specific and boil down to simple issues: trust, teamwork, and education. In order to successfully deploy a generator, the team must trust the tool. They must feel that they have some control over the tool and its implementation. They also need to know how the tool is used, both at a basic level (e.g., How do I run it?) and at a specific level (e.g., How do I specify when I need a table with a compound primary key?).
Perhaps the biggest drawback of code generation is that it falls to the implementer of the tool to ensure successful adoption within the team. If you put a copy of the code generator on the server and expect that people will immediately understand its use and the compelling value, then you are sure to fail. Education and empathy are key.
Given an understanding of which Java application components we can generate and what we have to look out for, let's talk about the generators that build them.
|
OPCFW_CODE
|
The Marketplace team reserves the right to make small changes to text fields and images uploaded to the app's listing in order to give the best possible user experience. You'll be notified when something is edited.
The app approval process for a first-time app review takes around 2-3 weeks.
To update your approved app, head to Settings > (company name) Developer Hub and click on the name of the approved app.
From here, you can make changes to your app and its Marketplace listing page. For example, you can update scopes, add app extensions, change the app listing’s images, video (YouTube) link, descriptions and more.
Depending on the fields you change/edit, your app may have to go through the app approval process again.
A critical field in your app registration form is an aspect of your app that can affect how it works for users, for example, the callback URL, scopes and app extensions. Editing these fields will require you to go through the app approval process again to apply the changes.
A non-critical field is an aspect that doesn’t affect how your app works, for example, your app listing’s images, YouTube video link and support and legal information.
Here is the list of critical and non-critical fields in the app registration form for approved apps:
Critical fields:
– Callback URL
OAuth & access scopes
– Access scopes
– Installation URL
– Link actions
– JSON modals
– JSON panels
– Custom modals
– Custom panels
– Custom floating window
– App settings page

Non-critical fields:
– App category
– App name
– Short summary
– App icon
– App listing images
– Full description
– YouTube video link
Setup and installation
– Instructions for users
Support and legal info
– Website URL
– Terms of Service URL
– Pricing page URL
– Support URL
– Support email
– Documentation URL
– Issue tracker URL
App review info
– Main contact email
When you edit a critical field in your app and click “Save”, you will be prompted with a confirmation dialog.
Should you wish to continue editing your app, click “Continue”. You will be returned to your Developer Hub dashboard, where a private copy of your app with pending changes will be created.
The changes will be live immediately when you edit a non-critical field and save your app. This action will also send an automatic notification to the Pipedrive Marketplace team so that we can keep an eye on all changes made to approved apps.
A user can either accept or deny all scopes; therefore, remember to request only the scopes that are absolutely necessary for your app’s use case.
You can edit your app’s access scopes in Developer Hub. As it’s a critical field, here’s what happens when you change your app’s scopes:
- A private copy of your app with pending changes will be created
- You can now add/remove scopes in the OAuth & access scopes tab
- Your app, with its pending scope changes, will be sent for review
- Once the changes are approved, you can merge them into your original app
- The changes will be live immediately after the merge and applied to your app’s users
- Your private app copy with pending changes will be deleted after the merge
All new installations will have your new scopes. The scopes displayed in the installation screen of the app listing will also be automatically updated to reflect the new scopes.
Note that adding new scopes will not halt the app’s functionality – the features built with previous scopes will continue working – while removing scopes may disrupt your app’s functionality.
After your app’s changes are merged, user reauthorization is required:
- All existing users who have your app installed will receive an email from Pipedrive informing them about the change in scopes. The email will state what information your app requests access to and ask the user to reauthorize the app.
- The old access tokens will be invalidated and can be refreshed with the old refresh tokens. The refresh tokens will remain valid for the old scopes until the user reauthorizes the app.
- The new scopes will only be accessible after the user reauthorizes the app. Users should also be guided from the app’s side to reauthorize the app.
If you only removed scopes, you will only need to refresh the tokens.
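Refreshing the tokens follows the standard OAuth 2.0 refresh-token grant. Here is a hedged curl sketch; the endpoint and parameter names follow Pipedrive's OAuth documentation as I understand it, and `$CLIENT_ID`, `$CLIENT_SECRET`, and `$OLD_REFRESH_TOKEN` are placeholders you must supply:

```shell
# Exchange the still-valid refresh token for a new access token.
# Client credentials are sent via HTTP Basic auth.
curl -X POST https://oauth.pipedrive.com/oauth/token \
  -u "$CLIENT_ID:$CLIENT_SECRET" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=refresh_token" \
  -d "refresh_token=$OLD_REFRESH_TOKEN"
```

The JSON response contains a fresh access token (and a new refresh token), still limited to the old scopes until the user reauthorizes the app.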
The app name is a non-critical field in Developer Hub. If you need to change your app name, there’ll be no additional reviews from the Marketplace Team. The only change will be your app’s URL in the Marketplace, which may affect your Marketplace app listing’s Search Engine Optimization (SEO).
When you edit critical fields in your app, save it and click "Continue" in the confirmation dialog, a private copy of your app with pending changes will be created.
You will be returned to your Developer Hub dashboard, where you’ll see your private app copy underneath your approved app. Click on your private app copy’s name to continue making any changes to your app.
To publish your pending changes, you need to send your app for review again. Click “Save” and confirm your email address to send your app copy with its pending changes for approval.
NB: Your app's client_secret will not change after you merge its pending changes. Your app will retain the same client_secret that it originally had.
When your app’s pending changes are approved, you can merge and publish them to your original app via the “Merge changes” button or the “Merge changes” option in the three-dot menu. The changes will then be applied to all users.
When you click “Merge changes”, you will be prompted to confirm your action. Once confirmed, the changes will be merged into your original app, and your private app copy with pending changes will be deleted.
Your app has to be approved and public for you to unpublish it. To unpublish your app, go to the three-dot menu next to your approved app’s name and click “Unpublish”.
When you unpublish your app, it will be hidden from the Marketplace without uninstalling it for existing users. Your app’s Marketplace listing page will remain available for anyone with its direct link.
The app will be visible only in the Marketplace Catalog list view for the app owner’s Pipedrive company and the Marketplace team. Remember that anyone with the link to your app can still view and install it.
If the Marketplace team decides to unpublish your app, it will be hidden from the Marketplace but remain installed for existing users. Similarly, anyone with the link to your app can still view and install it.
The Marketplace team will contact and inform you of the issues you must resolve before the app can be published again.
Updated 10 days ago
|
OPCFW_CODE
|
Social Enterprise What, Why and How? Part 2
Starting from where I left off in my last blog on Social Enterprise: What, Why, How?, in this blog we look at the key aspects of Social Enterprise that help an enterprise worker.
Aspects of E2.0
E2.0 adoption is reliant on the “new workforce”. Are the new workers going to be so different from the well-established norms of business?
What’s “new” in this new workforce?
John has been hired as a shipping expert at ACME, a global shipping company.
He had shipping as a minor in his academic curriculum.
John wants to easily access, share and extend data and maybe from a “Social” context.
ACME wants John as productive ASAP with optimal investment.
Given the above scenario, how do aspects in E2.0 help ACME and John meet their goal?
Aspects of Enterprise2.0 for John
As captured in my earlier blog on Social Enterprise, E2.0 has multiple aspects, some of which we will try to delve deeper into.
1. Information Discovery
John needs information to be readily accessible. There is heaps of intelligence built into ACME in the form of people, documents and process.
How can John consume so much so fast?
The probable answer is the information discovery has to be “as needed” by him.
We cannot overload John with all the terabytes available. So we need to make available to him the information he needs.
Whether the above information discovery is a “PUSH” from ACME or a “PULL” by John does not change the nature of the system.
The system should be intelligent to disseminate the information as requested.
2. Signals
Social context works on signals. Humans also share signals between themselves to make things work. Whether it is a phone call, an email, or something else, signals have existed forever.
There are two aspects of signals.
John could subscribe to any signal source (person, system, group) and listen in (RSS is an example). He could learn light-years of industry knowledge using trusted signals.
He could also apply this knowledge to his routine work. The counterpart of signal is “noise”: E2.0 has to improve the signal-to-noise ratio, or John will end up making costly errors, or he will find the system useless.
John needs to be able to connect with experts, peers, customers, and content for problem solving. ACME has to make sure this is easily possible.
Social Networks can be an example. To get to a colleague, a friend, or to a source of information should not be painful.
Connections could be based on peer recommendation, degree of separation, or as simple as a free search.
3. Object Context
For John to be effective at his job, John needs to apply the “Information” he discovered and the “Signals” he is subscribed towards a business activity.
Activities have objects associated to them. E2.0 capabilities will be relevant to John when he can apply them to an activity he is working on.
e.g. A shipping piece has gone missing and John has to act.
The shipping packet ran through multiple processes and multiple organizations. To be able to satisfy the “need to deliver the shipping package” on time, John needs an easy way to access the history/context of the shipping package as it flows through the enterprise chain.
Context cannot be only structured data. The multiple informal, unstructured interactions by the handlers at each stage could give very “RICH” data to John.
The object context could be leveraged in information discovery and signals to make the experience for John, ACME and customers more rewarding.
The conversation wrapper associated to a structured enterprise object will lead to a more efficient consumption.
E2.0 will be a reality; parts of it already exist in patches. How we apply these and other aspects of E2.0 could change how users experience the system. Bottom line: each aspect of E2.0 needs to be applied in the business context it has to live in.
|
OPCFW_CODE
|
Testify is a test framework framework. Think “Rack for testing.”
Like Rack, Testify is typically going to be run as a stack of applications that implement a call method. In typical usage, the stack will be assembled by a Runner and consist of a Framework and optionally some middleware, but this isn't a requirement.
A Testify Application is any object that implements a call method accepting a single argument. The one argument will be a hash (canonically called env), and the return value should be an array of TestResult objects.
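A minimal sketch of that contract, assuming only what is described above (the SimpleResult struct below is a hypothetical stand-in for Testify's real TestResult class):

```ruby
# Hypothetical stand-in for Testify's TestResult.
SimpleResult = Struct.new(:name, :status)

# The smallest possible Testify application: #call takes an env hash
# and returns an array of result objects.
class NullFramework
  def call(env)
    (env[:files] || []).map { |f| SimpleResult.new(f, :passed) }
  end
end

results = NullFramework.new.call(files: ["foo_test.rb", "bar_test.rb"])
results.each { |r| puts "#{r.name}: #{r.status}" }
```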
The env Hash
The env hash should be an instance of Hash with one of the following keys defined:
- an array of filenames containing the tests
- a path containing the files containing the tests

In addition, all of the following keys should be defined:
- an array of integers representing this version of Testify
- a hash of callbacks - see below.
Given that Testify is still at a fairly early stage of development, it is quite possible that more required keys will be added later.
Middleware will generally either modify the env hash before the framework gets it (eg, to execute only a subset of tests), or modify the TestResult objects on the way back out (eg, to colorize the result text). If you want to respond to a particular event as it happens (eg, send a Growl notification when a test fails), it is probably best to do this by adding a hook rather than looking at the returned TestResult objects, since call() will not return until all the tests have finished running.
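For example, middleware that narrows the run to a subset of files might look like this sketch (the names are illustrative; any object responding to call, such as the lambda below, can serve as the next app on the stack):

```ruby
# Hypothetical middleware: trims env[:files] on the way in, then
# delegates to the next app on the stack.
class OnlyMatching
  def initialize(app, pattern)
    @app = app
    @pattern = pattern
  end

  def call(env)
    trimmed = env.merge(files: (env[:files] || []).grep(@pattern))
    @app.call(trimmed) # results could be post-processed here on the way out
  end
end

inner = ->(env) { (env[:files] || []).map { |f| "ran #{f}" } }
stack = OnlyMatching.new(inner, /fast/)
puts stack.call(files: ["fast_test.rb", "slow_test.rb"])
```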
A Framework object is responsible for actually running the tests and generating TestResult objects. Frameworks are required to respect all of the hooks described below. Generally, Frameworks should inherit from Testify::Framework::Base and should have at least one alias in order to benefit from all of Testify's built-in functionality, but neither of these is a requirement. (See the Aliasable module in the Classy gem for a more thorough description of aliases. github.com/djspinmonkey/classy)
At least initially, most Framework classes will likely be adaptors for existing test frameworks (RSpec, Test::Unit, etc).
env is a hash containing arrays of callable objects (usually Procs, but anything responding to call will work) which will be called in specific circumstances. The following keys should be supported:
Will be called before running any tests. An array of Test objects will be passed in. A middleware app might use this hook instead of simply running immediately if it needed access to individual tests instead of just files, or if it needed to run after all other middleware. For example, if some other middleware later in the stack were excluding some subset of tests from running, you might need to be sure you were operating only on the tests that would actually be run.
Will be called after running all tests. An array of TestResult objects will be passed in. Middleware setting this hook should probably just work with the array of TestResults returned from calling the next Testify app on the stack instead, unless it needs to run before the results pass through any other middleware. Note, however, that there is no guarantee that another hook will not be added prior to yours.
Will be called before running each individual test. A single Test object will be passed in.
Will be called after running each individual test. A single TestResult object will be passed in.
Points to a hash containing symbols as the keys, and arrays of callable objects to be called whenever a TestResult is generated with the status corresponding to that symbol. The TestResult will be passed in.
If using the values provided by Testify.env_defaults, values for the first four keys will be initialized to empty arrays, and :after_status will be initialized to an empty hash.
Like the env hash, this is highly subject to change at this point.
Write adaptors for common test frameworks (rspec and minitest first, probably).
Write an autotest-like Runner
Write some useful middleware (growl notifications, colorized output, etc)
Note on Patches/Pull Requests
Fork the project.
Make your feature addition or bug fix.
Add tests for it. This is important so I don't break it in a future version unintentionally.
Commit; do not mess with the rakefile, version, or history. (If you want to have your own version, that is fine, but bump the version in a commit by itself so I can ignore it when I pull.)
Send me a pull request. Bonus points for topic branches.
Copyright © 2010 John Hyland. See LICENSE for details.
|
OPCFW_CODE
|
- I noticed that on the Nokia E71, the latency test wasn't running. After much debugging (this browser doesn't have a firebug equivalent), it turned out that the browser will not fire any events on an image object if the HTTP response's content length is 0.
Since I was using a 0 byte file named image-l.png, this file's content-type was set to image/png, but its content-length was 0. Most browsers fired the onerror event when this happened, but Nokia's browser, which is based on AppleWebKit/413, fires nothing. I then changed the image to return a 204 No Content HTTP response code, but I had the same problem. The only solution was to send some content. After playing around with several formats, I found that GIF could generate the smallest image of all at 35 bytes, so I used that. I haven't noticed any change in latency results on my desktop browser after the change, so I think it should be okay.
This also means that browsers will now fire the onload event instead of onerror, so I changed that code as well.
- The second change fixes a bug in the code related to timed out images. This is what was happening.
A browser downloads images of progressively larger size until it hits an image that takes more than 3 seconds to download. When this happens, it aborts the run, and moves on to the next run, or to the latency test.
For the latency check, this meant that the image would never download, and that's why latency would show up as NaN -- I was trying to get the median of an empty array.
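The failure mode is easy to see with a guarded median (a hypothetical helper mirroring what the test code computes): an empty sample list has no median, which is exactly the NaN that showed up in the results.

```javascript
// Median of an array of latency samples. An empty array has no median,
// which is why the unguarded code displayed NaN.
function median(values) {
  if (values.length === 0) return NaN;
  const sorted = values.slice().sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

console.log(median([120, 80, 100])); // 100
console.log(median([]));             // NaN
```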
I fixed this by changing the timeout logic a bit. Now a timeout does not abort the run, it only sets an end of run flag. Once the currently downloading image completes, successfully or not, the handler sees the flag, and terminates the run at that point. There are two benefits to this. The first is that this bug is fixed. The second is that we can now reduce the overall timeout since we are guaranteed to have at least one image load. So, the test should now complete faster.
- A third minor change I made was in the timeout values for each image. I've increased it a little for the small images so that the test still works on really slow connections -- like AT&T's 2G network, which gives me about 30-40kbps.
Thanks
I'd like to thank all my friends who tested this with their iPhones - that's where the timeout+parallel downloads bug was most visible, and there is no way I'd have fixed it without your help. Stay tuned for more posts on what I've learnt from it all.
So, go get the code, run your own tests, read my source code, challenge my ideas. If you think something isn't done correctly, let me know, or even better, send in a patch. The code is on github.
Short URL: http://tr.im/jsbwtest11
|
OPCFW_CODE
|
Open Issues, Gotchas, and Recent Changes
Q: How do I diagnose and/or debug an MPI problem
where the code hangs?
A: Try these steps:
- Try to run with one process; is the error still present? If not, try running
with two processes; is the error still present?
- Try running the job under the debugger (TotalView). When you think the job
is hung, interact with the debugger to determine where the hang is occurring
(i.e., what part of your code or MPI is involved). For example, are you in a
loop sending messages (infinitely), or are you hung because you are waiting for
an event that doesn't appear to happen, such as a message receive?
- Check your MPI environment variables (env | grep MP_). Are these the right settings for your MPI use?
If none of these things help, give the LC-Hotline more details on what your
code is doing, such as:
- Are you using the vendor's MPI or MPICH?
- Size of messages and number of messages being sent?
- Does this same job run successfully if you run it as a batch job? interactive
job? Are you setting different environment variables with a batch version than
with the interactive version?
- Does your job read input interactively? Is there a message number prefixing
the error message, and what is it? (e.g., 032-xxxx or some such form).
Q: Do we have an installation of the C++ binding for mpi++ (on AIX)?
A: The IBM MPI supports C++. There is no mpi++, but there are C++ interfaces
that are provided for C++ codes that make MPI calls. At the present time, IBM
supports only the C compatibility for C++, not the C++ interfaces that were added
for MPI-2. You need to #include <mpi.h>.
You also need to load with the MPI library, which is automatic when you use the
mpCC (C++) MPI script for compiling/linking with the IBM MPI library.
Q: What causes the .mpirun[process num] files to
appear (on AIX)?
A: Using mpirun to launch IBM MPI jobs is the reason for the .mpirun[process
number] files being created. You should be using poe instead. mpirun is for launching
MPICH MPI jobs, not a general purpose parallel job launcher.
Q: Is there any limitation on how many processes
my MPI job can fork on a system?
A: There are no limits other than those imposed by the batch system.
Q: I am using mpigather and receiving error "trying
to receive a message when there are no connections." Why?
A: You were calling MPI_Barrier with MPI_COMM_WORLD, but not all processors
executed this command, so it couldn't get communication from all processors.
Q: I am receiving a run-time error: "mpi Invalid
communicator error". Why?
A: This sort of error occurs when there is a mixture of MPICH header files
with IBM's MPI libraries, or vice versa. As a first step, which version of MPI
do you wish to use on White? The IBM MPI is the recommended version. If you are
using MPICH, it is recommended you use the MPICH scripts (mpicc, mpirun, etc.)
that provide the correct -I and -L paths
and the correct libraries. It is possible that you are using explicit -I or -L options
that are no longer valid; this could result in locating the incorrect header files.
Q: What is the maximum number of MPI tasks a job
may have? (on Purple, uP machines)
A: Up to 4096 User Space tasks. Up to 2048 IP tasks.
Q: I don't understand this error message: "MPI: INTERNAL ERROR catalog was closed, or catalog was not initialized". (on
A: Compile with the magic -binitfini:poe_remote_main linker flag that you need for POE applications. This will give informative error messages that will indicate any linking problems.
Q: I get segmentation fault when I compile with
64-bit mpi. (on Purple, uP)
A: To compile with 64-bit mpi:
- Compile with flags -q64 -qwarn64. This
will tell you about all the illegal conversions from int to pointer and back.
- Add -brtl -L... after -q64 for
the link line only (the -brtl can screw
up normal compiles to object files).
- Do not use the flags -bmaxdata or -bmaxstack in
64-bit mode compilations. In 32-bit mode, these give you more memory. In 64-bit
mode, they restrict your memory usage. In 64-bit mode, the default is unlimited.
- Set environment variable: setenv OBJECT_MODE 64
- You cannot mix 32-bit and 64-bit items, so make sure your entire code has
been compiled with these options. If you are loading any of your own libraries,
they too must be compiled with the 64-bit options.
- Do not explicitly include -lxlf90 -lm -lc in
your link line. These should not be necessary, and it is possible for this to
cause problems (usually link problems). We recommend taking these out unless
there is a good reason to have them.
- Caveat: Most 64-bit codes that get a segmentation fault on White are not
prototyping the malloc routine. Your best bet would be to run TotalView on the
executable and see if the segmentation fault happens in C code. I suspect that
a C library (which you link with) is calling malloc. Look for all C routines
that deal with memory allocation and add # include <stdlib.h> in
them. In 32-bit mode, malloc/calloc/etc. works properly even if stdlib.h is not
included. In 64-bit mode, not having the prototype causes the pointer to get
corrupted, resulting in seg faults. On other 64-bit platforms such as IRIX and
Tru64, they eventually modified their compilers to automatically prevent this
type of error for malloc/calloc/etc. (sort of an automatic prototyping) because
of all the problems caused. Make sure a prototype for array_alloc() that returns
a pointer is visible from everywhere it is being used. Any function that returns
a pointer but doesn't have a prototype visible will cause problems. The compiler
will do the wrong thing every time otherwise.
Q: What does this warning message mean, and how
can I eliminate it: "weak symbol multiply defined"?
A: The -w option to mpiCC is passed to
g++, and it suppresses compilation warning messages only. To suppress the load
(multiply-defined symbol) messages, you should try -Wl,-s.
This will pass the -s to the loader (ld). If
the -Wl,-s option is not satisfactory, you could
also try the g++ option to turn off weak symbol support, -fno-weak.
Q: Are there any web pages or other documents
that might provide "stack size information and strategy" for the users?
A: Read Jeff Fier's excellent summary of thread
Q: What is simultaneous multithreading?
A: Simultaneous multithreading (SMT) is not hard to understand. In traditional
designs, the entire collection of functional units in the CPU belong to one process
at a time. A process can therefore be executing instructions in some or all of
the functional units of the processor and nobody else can be using it at that
instance. With a feature that we called hardware multithreading in the RS64 line
of processors a few years ago, we provided additional hardware resources that
allowed two processes to have their state, essentially, on chip. When a process
had a cache miss that would normally stall it, it would switch to the other process,
the other thread, with a three-cycle pause. So this was still only one process
executing at a time, but could switch back and forth between two of them very quickly.
In SMT we have widened the data path somewhat to allow a thread indicator
on each instruction. So, we actually can fetch from two different instruction
streams and have instructions from two different instruction streams issuing
simultaneously to the different functional units on the chip.
We currently support two threads on the system, and it is a very general-purpose
mechanism. You can have instructions from different threads in different pipeline
stages of the same functional unit just following each other through. And it
provides the ability to use the hardware, the processor functional units, much more fully.
[Adapted from text provided by John McCalpin, IBM]
Q: What are the expected performance gains for SMT?
A: It varies from negative in some cases to up to 60% in others.
So, it is not at all unusual to see 20 to 30% speedup on applications without
doing anything special.
[Adapted from text provided by John McCalpin, IBM]
Q: I have heard people refer to "the Hypervisor" on
the Purple and uP machines. What is the Hypervisor? Do all LLNL machines have one?
A: The Hypervisor is software present on IBM POWER5-based machines. Traditionally,
the operating system's job is to provide an interface between the user and the
hardware and to provide protection so that the user cannot access any part of
the hardware in an uncontrolled way. In current directions, especially in server
consolidation projects, one finds that you want to run multiple operating systems
on the same piece of hardware, especially with all of the operating system exploits
and security problems that are happening.
So, what IBM has done is added a new
operating system, essentially, called the Hypervisor, that sits between the operating
systems and the hardware. The Hypervisor is modestly complicated, but it's enough
smaller than an operating system that one can have a lot more confidence about
its reliability. And now when the operating system wants to interact with the
hardware, it has to do it only through the Hypervisor or with the permission
of the Hypervisor. So that you can have, for example, multiple Linux kernels
running on the same hardware, and even if there is a security problem in Linux
and someone compromises the kernel, that kernel is still prevented from interfering
with any of the other partitions on the machine.
[Adapted from text provided
by John McCalpin, IBM]
|
OPCFW_CODE
|
CLID-9,CLID23: feat: hide unnecessary flags
Description
Since v2 uses containers/image as the underlying Go module to mirror images, some flags were inherited from this dependency but do not necessarily need to be exposed to customers.
This PR hides all the unnecessary flags that are being used currently only internally.
Fixes CLID-9 and CLID-23
Type of change
Please delete options that are not relevant.
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] This change requires a documentation update
How Has This Been Tested?
./bin/oc-mirror --v2 --help
Expected Outcome
Flags:
-c, --config string Path to imageset configuration file
--dest-tls-verify require HTTPS and verify certificates when talking to the container registry or daemon
--from string local storage directory for disk to mirror workflow
-h, --help help for oc-mirror
--loglevel string Log level one of (info, debug, trace, error) (default "info")
--max-nested-paths int Number of nested paths, for destination registries that limit nested paths
-p, --port uint16 HTTP port used by oc-mirror's local storage instance (default 55000)
--secure-policy If set (default is false), will enable signature verification (secure policy for signature verification).
--since string Include all new content since specified date (format yyyy-MM-dd). When not provided, new content since previous mirroring is mirrored
--src-tls-verify require HTTPS and verify certificates when talking to the container registry or daemon
--strict-archive archiveSize // If set, generates archives that are strictly less than archiveSize, failing for files that exceed that limit.
-v, --version version for oc-mirror
Hi @sherine-k,
I just tested the authfile flag with the following steps:
I tried to run without the flag and with the token expired on my side and I got the failure as expected:
./bin/oc-mirror --config ./alex-tests/alex-isc/clid-14-demo.yaml file://home/aguidi/go/src/github.com/aguidirh/oc-mirror/alex-tests/clid-9 --v2
unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
Then I ran it with the following mirrorToDisk command:
./bin/oc-mirror --config ./alex-tests/alex-isc/clid-14-demo.yaml file://home/aguidi/go/src/github.com/aguidirh/oc-mirror/alex-tests/clid-9 --authfile /home/aguidi/Downloads/pull-secret.txt --v2
I did not get the previous authentication error.
and then diskToMirror command:
./bin/oc-mirror --config ./alex-tests/alex-isc/clid-14-demo.yaml file://home/aguidi/go/src/github.com/aguidirh/oc-mirror/alex-tests/clid-9 --authfile /home/aguidi/Downloads/pull-secret.txt --v2
Destination registry got the correct images as you can see below:
{
"repositories": [
"devworkspace/devworkspace-operator-bundle",
"devworkspace/devworkspace-project-clone-rhel8",
"devworkspace/devworkspace-rhel8-operator",
"openshift4/ose-kube-rbac-proxy",
"redhat/redhat-operator-index",
"ubi8-minimal"
]
}
Do you think this is enough test to include the authfile? I will put this PR on hold to avoid merging until we decide what we want to do.
/hold
Do you think this is enough test to include the authfile? I will put this PR on hold to avoid merging until we decide what we want to do.
Yes ! awesome work! thanks!
/unhold
/lgtm
/label acknowledge-critical-fixes-only
/jira-refresh
/jira refresh
|
GITHUB_ARCHIVE
|
When I wanted to transfer a larger file from my NAS to my FP3, I was surprised by the low speed.
I then found out that it was only connected to my 2.4 GHz Wifi, not the faster 5 GHz one (my FRITZ!Box 7560 router provides both frequencies, with the same SSID).
Normally the phone should auto-select the better one.
Anyway, after forcing the FP3 to 5 GHz only, I was even more surprised (in a very negative way) that the phone could barely keep the connection stable. Speed was abysmal and the connection was lost every few seconds.
To make a long story short, after playing around with different 5 GHz Wifi channels (disabling auto-channel selection in the router), it turns out that my FP3 has severe speed problems with anything higher than channel 112 (>5560 MHz), with channel 116 (5580 MHz) being the worst, with an unstable connection.
I am now using channel 52 and the phone can reach 200 Mbit/s or even higher.
I know that Wifi speed is tricky and depends on whether or not you have many other neighbouring Wifis, but I wouldn’t say I am in an overcrowded 5 GHz Wifi environment (I only count ~3 foreign 5 GHz Wifis with a somewhat stronger signal).
My router actually auto-selected channel 116-128 because it was/is the least used range.
Can anyone confirm this?
PS: My girlfriend owns a FP2 and she has had a bad Wifi experience for a long time. This also seems to have improved after I have switched to channel 52!
As 5 GHz has a short range, you should be able to set it static in your WAP without it causing interference. This depends a bit on your environment, e.g. in a flat you might have some more interference but in general there are going to be like 2 neighbors and you just pick a frequency around it.
Go for a 40 MHz wide range, so that DFS works well.
US has nothing to do with this. In fact, US is very lenient on 5 GHz usage (as is EU and CH). See e.g.
Yeah, there’s your problem. In your WAP you need to enable the preference that it should prefer 5 GHz (“band steering”), or you put it on a different SSID and prefer the 5 GHz one manually (hence the 2.4 GHz one is used as backup).
PS: if you’re transferring large files from your NAS you could consider plugging the FP3 into the NAS via USB.
I don’t think that this is/was “the” problem. All other devices had no problem selecting the 5 GHz Wifi, even on channel 116. I guess the FP3 chose to connect to the 2.4 GHz Wifi because it couldn’t get a stable connection to the 5 GHz Wifi (but now on channel 52 it prefers the 5 GHz Wifi).
My AVM router doesn’t explicitly call it “band steering” but there is an option with a lengthy description that matches what “band steering” is doing, and it is activated!
Sure, there are always other ways. Way less comfortable, though.
FP3 with FRITZ!Box 7590 user here - similar (same) behaviour - Wifi connection very flaky with phone switching between 2.4GHz and 5GHz and ending up at 2.4
Fritz selected the higher channels (116 and up) as it would’ve been alone there. Else there are 10 (including mine) 5GHz networks now sharing the lower ranges.
After pinning the channel to 56 phone and box are good to go.
re:Channelwidth - 160MHz width is NOT allowed
@moldowan: you’re not alone ; thanks for starting this thread. I’ll try to investigate further later this evening.
Nobody suggested a channel width of 160 MHz. You could try between 20/40/80 though. What I’ll try is see if I can put it on 20 or 40 with channel 116. You could play around with it.
Also, the fact that you can see other networks on channels does not mean they’re having a good signal (you should check their RSSI). If their signal is weak, you won’t have much (if any) interference with them.
The Fritz router seems to automatically use an 80 MHz channel width, because it always shows a (max) channel usage of 4 (when 116 is selected: 116, 120, 124, 128; when 52 is selected: 52, 56, 60, 64).
Each of those channels is 20 MHz wide.
I can confirm the 5Ghz channel problem on the FP3 with the Fritzbox 7590. When using auto channel, the phone would not even connect to the network at all. Switching to channel 52 made it work properly in n+ac mode.
I have been following this discussion since I received my FP3 in December.
I always had connection issues with my FP3 at home (Fritz.box 6490 Cable @ 5 GHz and a Fritz.Repeater 310). However, I have no issues at work (Eduroam @ 5 GHz - no clue what specific hardware that is).
My FP3 only connects to the repeater (which only supports 2.4 GHz) but never to the router itself, regardless of my distance to the repeater, even when I’m in close proximity to the router.
I tried to change the router settings as described in this thread and a similar one - however, no change whatsoever - and the update didn’t change anything either.
I guess this issue is really related to the WiFi chip in the FP3 and the Fritz.box, as all other devices get along pretty neatly with the 5 GHz network and switching between access points.
|
OPCFW_CODE
|
3 minute reading time
Git and the various hosting platforms support commit signing as an additional verification step. There seems to be an active debate on whether it should be used regularly, though I’ll describe it here in case you want to set it up.
You’ll need to have a GPG key already created.
First, locate the key you want to sign with:
gpg --list-secret-keys --keyid-format SHORT
This will output something like
/home/user/.gnupg/pubring.kbx
------------------------------
sec   rsa4096/8294756F 2020-04-11 [SC] [expires: 2021-04-11]
      KDIAUBEUX837DIU79YHDKAPOEMNCD7123FDAPOI
uid         [ultimate] Brandon Rozek (Git)
ssb   rsa4096/9582109R 2020-04-11 [E] [expires: 2021-04-11]
If you want to sign your commits with your main private key, then you can use the main key’s fingerprint. In the example above, that’s the part that starts with KDIA.
(Optional) Creating a signing subkey
Alternatively, we can create a subkey specifically for signing commits. To do that, first we need to enter the edit mode for that key.
gpg --edit-key $FINGERPRINT
$FINGERPRINT is the same fingerprint above.
You’ll see something like the following
sec  rsa3072/3E40C8DB05FCCFAD
     created: 2022-12-18  expires: 2023-12-18  usage: SC
     trust: ultimate      validity: ultimate
ssb  rsa3072/50CC6B37C26F7882
     created: 2022-12-18  expires: 2023-12-18  usage: E
[ultimate] (1). Brandon Rozek

gpg>
From there you type addkey, which will then present you with some options:
Please select what kind of key you want:
   (3) DSA (sign only)
   (4) RSA (sign only)
   (5) Elgamal (encrypt only)
   (6) RSA (encrypt only)
  (14) Existing key from card
Your selection?
As before, I recommend going with the default signing key option.
In this case it’s (3) DSA (sign only).
DSA keys may be between 1024 and 3072 bits long. What keysize do you want? (2048)
As before, either stick with the default or tweak based on your personal assessment of risk.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Same advice as before in terms of key expiration.
I generally stick with 1y. Then, after confirming the sanity checks, you should see the key created.
sec  rsa3072/3E40C8DB05FCCFAD
     created: 2022-12-18  expires: 2023-12-18  usage: SC
     trust: ultimate      validity: ultimate
ssb  rsa3072/50CC6B37C26F7882
     created: 2022-12-18  expires: 2023-12-18  usage: E
ssb  dsa2048/5C1B6FCA0DABB046
     created: 2022-12-18  expires: 2023-12-18  usage: S
[ultimate] (1). TestKey
The signing key is denoted by the label usage: S.
From there we can take its fingerprint, which for the example above starts with 5C1B, and proceed with the next step.
From here, we need to tell Git which key we want to sign with:
git config --global user.signingkey $FINGERPRINT
To sign a commit, add the -S flag:
git commit -S -m "Initial Commit"
To always sign your commits, set:
git config --global commit.gpgsign true
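For reference, after running the two git config commands above, your ~/.gitconfig should contain something like the following (shown here with the example subkey fingerprint from earlier; substitute your own):

```
[user]
	signingkey = 5C1B6FCA0DABB046
[commit]
	gpgsign = true
```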
Remember to add your public key to GitHub, GitLab, etc. You can export it with:
gpg --armor --export $FINGERPRINT
|
OPCFW_CODE
|
+++ This bug was initially created as a clone of Bug #1009941 +++
Description of problem:
When trying to create RAID1 with 3 mount points, /boot, / and swap, all having type RAID1, an exception occurred.
Steps to Reproduce:
1. Start installation
2. Choose manual partitioning
3. Create /boot(500MB), swap(1500MB) and /(rest, size not specified) partitions
Actual results:
An exception occurred.
Expected results:
No exception should occur.
--- Additional comment from Vratislav Podzimek on 2013-09-23 15:54:22 EDT ---
13:49:26,105 INFO program: Running... mdadm --create /dev/md/ --run --level=1 --raid-devices=2 --metadata=1.0 --bitmap=internal /dev/vda1 /dev/vdb1
13:49:26,309 INFO program: mdadm: /dev/md/ is an invalid name for an md device (empty!).
13:49:26,311 DEBUG program: Return code: 1
--- Additional comment from David Lehman on 2013-10-01 14:27:11 EDT ---
The user set the name of the md array to "" and instead of setting a reasonable name blivet allowed the array name to be set to the empty string.
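The missing guard is easy to sketch. The following is purely illustrative (the function name and message are hypothetical, not the actual blivet patch), but it shows the kind of validation that would reject the empty name before mdadm is ever invoked:

```python
def validate_md_array_name(name):
    # Hypothetical sketch, not blivet's actual patch: reject names that
    # would make mdadm fail with
    # "mdadm: /dev/md/ is an invalid name for an md device (empty!)"
    if not name or not name.strip():
        raise ValueError("md array name must not be empty")
    return name
```

With a check like this in place, the custom spoke could flag the empty name as invalid input instead of crashing while committing the changes.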
It's possible to create a setup in custom spoke that always causes a crash when committing the changes.
A patch exists and has been reviewed FWIW.
making this a public bug so it can be discussed as a proposed blocker for f20 beta
"When using the custom partitioning flow, the installer must be able to: ... Create mount points backed by ext4 partitions, LVM volumes or btrfs volumes, or software RAID arrays at RAID levels 0, 1 and 5 containing ext4 partitions"
Discussed at 2013-10-16 blocker review meeting: http://meetbot.fedoraproject.org/fedora-blocker-review/2013-10-16/f20beta-blocker-review-4.2013-10-16-16.02.log.txt . Accepted as a blocker under the criterion "When using the custom partitioning flow, the installer must be able to: ... Reject or disallow invalid disk and volume configurations without crashing." - https://fedoraproject.org/wiki/Fedora_20_Beta_Release_Criteria#Custom_partitioning
anaconda-20.25.1-1.fc20, python-blivet-0.23.1-1.fc20 has been submitted as an update for Fedora 20.
Package anaconda-20.25.1-1.fc20, python-blivet-0.23.1-1.fc20:
* should fix your issue,
* was pushed to the Fedora 20 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing anaconda-20.25.1-1.fc20 python-blivet-0.23.1-1.fc20'
as soon as you are able to.
Please go to the following url:
then log in and leave karma (feedback).
Verified fixed with F20 Beta TC5.
anaconda-20.25.1-1.fc20, python-blivet-0.23.1-1.fc20 has been pushed to the Fedora 20 stable repository. If problems still persist, please make note of it in this bug report.
|
OPCFW_CODE
|
Created on 2014-07-20 03:37 by rouilj, last changed 2015-12-02 20:10 by ber.
|mimetypes.log.txt||techtonik, 2015-01-17 18:15|
|msg5120||Author: [hidden] (rouilj)||Date: 2014-07-20 03:37|
From the mailing list, opening an issue as requested by ralf. - HTML attachments should not be served as text/html, see discussion on roundup-users under the title "Spam attack, observations, how to repair". I've committed a partial fix but this needs more work: Browsers seem to be interpreting *any* content-type without a '/' as html. This is actively used by spammers. Since we don't currently have nofollow set for attachments, search engines happily index these pages. There is no issue yet, if someone has time and wants to contribute, making an issue for this would be a welcome contribution. If you do this, please set me on the nosy list. The email thread Ralf referred to is at: http://permalink.gmane.org/gmane.comp.bug-tracking.roundup.user/10733
|msg5123||Author: [hidden] (ezio.melotti)||Date: 2014-07-20 07:44|
FWIW we have been using this detector to force text/plain on (x)html documents: http://hg.python.org/tracker/python-dev/file/6f1b863bd1d8/detectors/no_texthtml.py > Browsers seem to be interpreting *any* content-type without a '/' as > html. This is actively used by spammers. I'm not sure if our detector covers this case, but so far it's been working fine for us.
|msg5125||Author: [hidden] (schlatterbeck)||Date: 2014-07-21 15:43|
Thanks John for creating the issue. On Sun, Jul 20, 2014 at 07:44:24AM +0000, Ezio Melotti wrote: > > FWIW we have been using this detector to force text/plain on (x)html > documents: > http://hg.python.org/tracker/python-dev/file/6f1b863bd1d8/detectors/no_texthtml.py > > > Browsers seem to be interpreting *any* content-type without a '/' as > > html. This is actively used by spammers. > > I'm not sure if our detector covers this case, but so far it's been > working fine for us. Your detector will *not* work for the case we were discussing. Browsers accept any content-type without a '/' as html. And search engines will happily index stuff as html. So as soon as someone sets the type to 'anything' or 'wrzlbrmft' your detector will fail and the file will be interpreted as html. Note the the current fix is very similar to your detector and I didn't know browsers are doing this until it came up here. We are shipping text/html attachments as application/octet-stream unless a config-option is set. This needs to be extended to content types that don't contain a '/'. A whitelist feature would be even better (configurable list of content-types that are *not* mangled when shipping via the web server). Ralf -- Dr. Ralf Schlatterbeck Tel: +43/2243/26465-16 Open Source Consulting www: http://www.runtux.com Reichergasse 131, A-3411 Weidling email: firstname.lastname@example.org allmenda.com member email: email@example.com
|msg5130||Author: [hidden] (ber)||Date: 2014-08-05 13:10|
This is a release blocker.
|msg5179||Author: [hidden] (ber)||Date: 2015-01-05 15:50|
There have been discussions on this in December 2014 on the devel ml. Ralf wrote in the end:
"""
So we should
- check for valid mime-types on incoming attachments (either via web-interface or via mail). Can be realized as an auditor so that users can change the policy here. We should only rewrite clearly invalid mime-types at that point.
- have a whitelist of attachments that can safely be shipped to the browser. All mime-types not in the whitelist are shipped as application/octet-stream. My tests indicate that browsers will not display these attachments with this content-type, they only offer to download the file. The original code by Richard attempted this but failed on invalid mime-types for reasons indicated above.
I think the hardest part is coming up with a decent whitelist that doesn't miss too many content-types in use out there. But users can reconfigure the whitelist (and give feedback) so we can converge to something usable.
"""
Should we make separate issues out of this?
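The whitelist approach Ralf describes can be sketched in a few lines of Python. This is an illustrative sketch only (the type set and function name are invented, not Roundup's actual code):

```python
# Content types considered safe to hand to the browser as-is
# (illustrative list only).
SAFE_CONTENT_TYPES = {
    "text/plain",
    "image/png",
    "image/jpeg",
    "image/gif",
    "application/pdf",
}

def outgoing_content_type(declared):
    # Browsers may sniff anything without a '/' (e.g. "wrzlbrmft") as HTML,
    # so everything off the whitelist is shipped as a plain download.
    if "/" in declared and declared in SAFE_CONTENT_TYPES:
        return declared
    return "application/octet-stream"
```

A spammer-supplied type like "wrzlbrmft" (no '/') then reaches the browser as application/octet-stream, which browsers offer for download instead of rendering.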
|msg5185||Author: [hidden] (techtonik)||Date: 2015-01-17 12:26|
On Sun, Jul 20, 2014 at 6:37 AM, John Rouillard <firstname.lastname@example.org> wrote: > > I've committed a partial fix but this needs more work: Where is the commit(s) for the history? > Browsers seem to be interpreting *any* content-type without a '/' as > html. Browser Security Handbook confirms this: https://code.google.com/p/browsersec/wiki/Part2#Survey_of_content_sniffing_behaviors
|msg5186||Author: [hidden] (rouilj)||Date: 2015-01-17 15:24|
Hi Anatoly: In message <email@example.com> <firstname.lastname@example.org>, anatoly techtonik writes: >On Sun, Jul 20, 2014 at 6:37 AM, John Rouillard ><email@example.com> wrote: >> >> I've committed a partial fix but this needs more work: > >Where is the commit(s) for the history? I opened the issue on Jul 20, 2014 at Ralf's request. So the commit comment originated from Ralf not me. Maybe check the mercurial log for his commits rather than looking for my name will turn up the commit?
|msg5187||Author: [hidden] (techtonik)||Date: 2015-01-17 17:34|
>>> I've committed a partial fix but this needs more work: >>Where is the commit(s) for the history? Found it. 48d93e98be7b or http://sourceforge.net/p/roundup/code/ci/48d93e98be7b3428785e1087495be7ec2ee81512/ Committing a better fix now.
|msg5188||Author: [hidden] (techtonik)||Date: 2015-01-17 18:15|
Commit 63c31b18b955 fixes this issue: http://sourceforge.net/p/roundup/code/ci/63c31b18b95593865fd8bbd932b0030d0e2110be/ It adds whitelist composed from analysis of attached file and https://mail.python.org/pipermail/tracker-discuss/2015-January/003988.html The whitelist is hardcoded, because adding another option requires a major version bump, because it will lead to removal of allow_html_file, new upgrading.txt docs etc. etc. and due to limited time that we all have I don't want to delay a release.
|msg5394||Author: [hidden] (ber)||Date: 2015-12-02 20:10|
Creating issue2550897 for tracking a better solution, closing the urgent issue here, because it seems to be resolved. (Testing reports appreciated.)
|2015-12-02 20:10:03||ber||set||status: new -> fixed|
messages: + msg5394
messages: + msg5188
|2015-01-17 17:34:38||techtonik||set||messages: + msg5187|
|2015-01-17 15:24:21||rouilj||set||messages: + msg5186|
messages: + msg5185
|2015-01-05 15:52:44||ber||link||issue2550863 dependencies|
|2015-01-05 15:50:38||ber||set||messages: + msg5179|
|2014-12-18 14:44:43||techtonik||set||priority: high -> urgent|
|2014-08-05 13:10:34||ber||set||messages: + msg5130|
|2014-08-05 13:04:08||ber||set||nosy: + ber|
|2014-07-21 15:43:32||schlatterbeck||set||messages: + msg5125|
messages: + msg5123
|
OPCFW_CODE
|
It’s time to rethink Interaction Design.
In the massively augmented Internet of Things world, our current approaches to IxD, AI and Human Centered Design are too solutionistic and task oriented.
My collaborator Betti Marenko and I are proposing Animistic Design as an alternative. We believe that designers can use this new approach to shift focus and create systems that better support creativity, labor, learning, and collaboration.
Not Human Centered Design
This approach means that the human is no longer the center of design.
We envision collections of autonomous, unpredictable devices with distinct personalities and interests, interacting with people and each other. Together they form a creative context. What’s important are the in-between spaces where conversations and productive serendipity happen. Animistic design isn’t about the human or the object or the solution. It’s about designing vibrant, evolving ecologies that benefit people alongside the rest of the ecosystem. Nothing is at the center.
Multiple Perspectives, Shared Data
What you get depends on the personality of each device. And since you have a team of digital colleagues, you’ll get a range of material and perspectives to work with as part of the conversation.
In Animistic Design, your different devices may contribute provocative, helpful, off-topic or quirky content and commentary. You’ll give them access to the media, links and sensor data that you’d like to be a part of the conversation, and they’ll select from, remix, and extend it.
Your “team” creates ongoing conversations informed by your own curated content and enhanced with related external material. This is a critical difference from chat bots and other approaches.
Designing diverse ecologies allows designers to move away from providing single, correct answers. “Actually-smart” humans can work out which threads to follow, make new connections and provide feedback to the system so it further evolves.
Recognizing the affordances and limits of AI, animistic devices can be “dumb-smart,” producing interestingly biased or “not quite right” contributions. The ecology of these contributors thrives because of the diversity of multiple “humble AI” participants, each using different approaches.
Animistic Design provides a framework for designers to work with Artificial Intelligence, and encourages a healthy skepticism and deeper engagement with AI as a design medium.
People don’t only think inside their brain, but extend their thinking into the environment through the things they interact with.
By assembling an ecology of embodied animistic devices in a workspace, we allow people to leverage distributed cognition, where the mind arises from the interaction between people and their surroundings. One idea or bit of information is here “in” this physical object, and another idea is over there “in” that object.
Through this physical embodiment, people are able to manipulate ideas in a spatial way, working with the ambiguous and diverse character of complex problems instead of reducing it. Look at a typical design studio and you’ll see the walls covered with different materials that stimulate the creative process. What if those things could have a conversation with you?
Colleagues not Slaves
Working with animistic participants requires new social relationships.
Rather than commanding devices, we have a conversation. Instead of treating them as our slaves (with the bad habits that may create), we work with them as colleagues, friends, employees and even adversaries, getting what we need from whoever has a valuable contribution in that moment. Animistic systems can be non-linear, provocative, disruptive and foolish in addition to being on-point and helpful. Which is exactly what is needed for teams that generate new ideas, as Google has recently found.
Native Digital Animism
To be clear, we’re not advocating cute, superficially “alive” systems with faces and fur.
Instead, we believe that a new, native digital animism can be developed that does not rely on skeuomorphs or anthropomorphism. This neo-animism will leverage what computation can contribute, while subtly cueing the human imagination to enliven these systems. We think it’s perfectly human to knowingly suspend disbelief and embrace an animistic, mythic view of computation (while taking a more rational perspective when that’s useful).
A perceived personality and corresponding behaviors can provide a narrative that helps us understand the intent and state of our complex digital devices as they go about their autonomous ways. Animism is also interesting because it can make us more conscious of suspending disbelief — we know it’s a fiction rather than thinking, falsely, that any computational system is “accurate,” “rational,” and “dispassionate”.
A New Approach to Interaction
If we are to move beyond automation that merely replaces human tasks, and instead build new ways to enhance human creativity, labor, learning, and collaboration, we have to design new interactions.
Conventional human centered design positions digital devices as slaves that do what the master wants. But as designers, we can restructure human-computer relations by seeing that creative work benefits from collaboration, autonomy, serendipity, diversity, risk-taking and imperfection, even for our digital partners. This is Animistic Design.
|
OPCFW_CODE
|
I have to assume it's "Anon" from Anonymous of Holland who conducted this recent interview with Obsidian Entertainment's Chris Avellone, in which the veteran designer tackles questions about Project Eternity's mega dungeon, the opportunities afforded to him during the creation of Star Wars: Knights of the Old Republic II, his proudest development moments, and more:
AoH: Another Project: Eternity related topic that seems to be on everyone's mind is the Mega Dungeon. During the Kickstarter campaign we collectively brought the number of levels to a massive 15, which has me slightly worried it could end up turning into a Shin Megami Tensei-level chore to get through. So far this has also been the subject of very few updates, which I suppose is because Obsidian is still working out the details. Are you personally working on the dungeon as well, and if so, in what capacity? Should we expect the dungeon to be mainly a '˜hack and slash' experience or will there be more to it like in the case of the Castoff's Labyrinth in Torment: Tides of Numenera?
MCA: I am not personally working on the dungeon (we haven't entered the design stage for it yet), and I couldn't give you an exact breakdown of talking vs. fighting. That said, combat and combat resolution is a big part of Eternity, and while conversations and stealth can help set you up in a favorable position when hostilities erupt, talk-intensive encounters are likely to be left for communities, towns, and other areas where it makes more sense; it may be that Od Nua becomes one such location. In the current iteration of the story, the mega dungeon serves a key role and has a lot of interesting mechanics being kicked around for it that I think will be compelling.
But to make this question personal, I love level design. The last time I did area design for Wasteland 2, I enjoyed it, although we have level designers here that are more capable than I could aspire to be (Bobby Null and Jorge Salgado are currently tackling the Vertical Slice levels). Personally, I look at a level design such as Od Nua and see possibilities, not a chore, and so do our level designers. If I asked someone to design 15 levels of archaic soul-lore-focused insanity and have fun with it, the results I imagine would be great, and it's worked with our other projects where we've given the LDs such freedom ([Fallout: New Vegas DLC] Old World Blues).
AoH: As someone who has been in the industry for such a long time, there must be many things you've done and were extremely proud of, and perhaps just as many things that you regret not getting exactly right or even screwing up completely; what are the first things you think of when you hear this?
MCA: I am proud that I still have time and make time to talk with and respond to aspiring game developers who want advice or help. That would rarely happen while I was growing up, and I always appreciated the few people that took the time to give me pointers and help me reach my dream job.
I am also proud that I champion speech-related and talking-related pacifist solutions and freedom of character agency in RPGs I work on. I like it when you can join the bad guy, or tell him why his plan is screwed and have him give up or fall apart, and just prove that brains is a viable solution for some character builds.
I am proud of making a game ([Planescape:] Torment) that took everything I hated about RPGs and turned them around and lived to see it appreciated for what it was. On the converse, I am not proud of the fact that that may be the only message I have for the world before I'm out of here. ;)
Also, I'm not happy with scope evaluations of previous projects I've worked on, the last of which was Knights of the Old Republic II. We could have downscaled earlier and not pursued some story elements in that title (cut down the companions, removed the minigames, and recognized that cutscenes are difficult to do in the engine) and made a more complete version. I've worked hard to fix that in titles since, but KOTOR2 still stands out as a game that could have been much better than it was, and I am responsible for that.
|
OPCFW_CODE
|
require "edition"
require_relative 'simple_smart_answer_edition/node'
require_relative 'simple_smart_answer_edition/node/option'
class SimpleSmartAnswerEdition < Edition
include Mongoid::Document
field :body, type: String
embeds_many :nodes, class_name: "SimpleSmartAnswerEdition::Node"
accepts_nested_attributes_for :nodes, allow_destroy: true
GOVSPEAK_FIELDS = Edition::GOVSPEAK_FIELDS + [:body]
@fields_to_clone = [:body]
def whole_body
body
end
def build_clone(edition_class=nil)
new_edition = super(edition_class)
new_edition.body = self.body
if new_edition.is_a?(SimpleSmartAnswerEdition)
self.nodes.each {|n| new_edition.nodes << n.clone }
end
new_edition
end
# Workaround mongoid conflicting mods error
# See https://github.com/mongoid/mongoid/issues/1219
# Override update_attributes so that nested nodes are updated individually.
# This get around the problem of mongoid issuing a query with conflicting modifications
# to the same document.
alias_method :original_update_attributes, :update_attributes
def update_attributes(attributes)
if nodes_attrs = attributes.delete(:nodes_attributes)
nodes_attrs.each do |index, node_attrs|
if node_id = node_attrs['id']
node = nodes.find(node_id)
if destroy_in_attrs?(node_attrs)
node.destroy
else
node.update_attributes(node_attrs)
end
else
nodes << Node.new(node_attrs) unless destroy_in_attrs?(node_attrs)
end
end
end
original_update_attributes(attributes)
end
def initial_node
self.nodes.first
end
def destroy_in_attrs?(attrs)
attrs['_destroy'] == '1'
end
end
|
STACK_EDU
|
I'm running Red Hat 7.3 with kernel 2.4.18-10 and all errata patches
installed. My system has been running for over three years without any
problems. All I've done to it in that time is add a mirror set of two
80GB drives, a Promise IDE controller, and upgrade Red Hat through the 7.x
series. Right now I have three drives. A 13GB system drive off the
motherboard, and two 80GB drives software mirrored on the Promise
controller. Everything is running ext3.
Recently, one of the drives in the mirror set died. I decided to replace
both the drives so that I could have a mirror set with two identical
drives. That was about two weeks ago. This past Saturday (Sep 28), I
woke up to find the error below on the console. It wasn't written to
/var/log/messages. The machine was locked cold so I had to reboot it.
It ran fine until this morning (Oct 1) when I woke up and turned the
monitor on to see the same message. This time I wrote down everything on
the screen and typed it back in for this email.
Does anyone know how I might go about debugging this? As I said, the only
thing that has changed recently (the last four months) is the removal of
the old mirrored drives and then the addition of these two new drives.
Thanks in advance for any help.
Here is the error message in full:
Assertion failure in __journal_remove_journal_head() at journal.c:1772:
------------[ cut here ]------------
kernel BUG at journal.c:1772!
invalid operand: 0000
nfsd lockd sunrpc autofs tulip ide-cd cdrom usb-uhci usbcore ext3 jbd raid1
EIP: 0010:[<e0818b59>] Not tainted
EIP is at __journal_remove_journal_head [jbd] 0xa9 (2.4.18-10)
eax: 0000001e ebx: d47ebf60 ecx: 00000001 edx: 000028a5
esi: d85d9ee0 edi: d85d9f10 ebp: c2483850 esp: df623e64
ds: 0018 es: 0018 ss: 0018
Process kjournald (pid: 16, stackpage=df623000)
Stack: e081b42f 000006ec d47ebf60 d85d9ee0 e0814533 d47ebf60 d85d9ee0 df622000
00000000 00000000 00000000 00000019 dfc5dba0 d7cdefd0 0005b9ce d29b281e
db6bf760 de956000 c01db16b d4955f80 d4955f20 d4955ec0 d4955e60 d4955e00
Call Trace: [<e081b42f>] .rodata.str1.1 [jbd] 0x4ef
[<e0814533>] journal_commit_transaction [jbd] 0x293
[<c01db16b>] ip_rcv [kernel] 0x31b
[<e0817106>] kjournald [jbd] 0x116
[<e0816fd0>] commit_timeout [jbd] 0x0
[<c0107136>] kernel_thread [kernel] 0x26
[<e0816ff0>] kjournald [jbd] 0x0
Code: 0f 0b 58 5a 39 1e 74 34 68 39 b6 81 e0 68 ed 06 00 00 68 2f
|
OPCFW_CODE
|
There is no doubt that software bugs are time consuming, but can the time investment be drastically decreased? YES! Solving bugs in a parallel programming environment eats up a lot of time because conventional debugging techniques only allow users to control program execution in the forward direction, forcing developers to apply laborious and inefficient methods in their attempt to identify the problem. Introduce the concept of reverse debuggers, and debugging suddenly becomes extremely efficient. Reverse debuggers allow you to work backward from a failure, error, or crash to its root cause and step freely both forwards and backwards through execution without having to restart the program.
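The record-and-replay idea underlying reverse debuggers can be illustrated with a toy sketch in Python (purely conceptual; real tools such as ReplayEngine, or GDB's "record full" mode, capture execution at the instruction level rather than snapshotting variables by hand):

```python
class RecordingDebugger:
    """Toy model of reverse debugging: record a snapshot of program
    state at every step so execution can be rewound later."""

    def __init__(self):
        self.history = []

    def step(self, state):
        # record the state before the program moves forward
        self.history.append(dict(state))

    def reverse_step(self):
        # step backwards by restoring the most recent snapshot
        return self.history.pop() if self.history else None

dbg = RecordingDebugger()
x = 0
for i in range(3):
    dbg.step({"i": i, "x": x})
    x += i

# x is now 3; rewinding restores the state just before the last iteration
prev = dbg.reverse_step()   # {"i": 2, "x": 1}
```

Instead of restarting the program and hoping a rare failure reappears, the developer rewinds from the failure toward its root cause.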
According to a recent study at the University of Cambridge, researchers found that when respondents used reverse debugging tools, such as Rogue Wave’s ReplayEngine, their debugging time decreased by an average of 26%. Developers like you can leverage these time savings to develop additional products, features, and capabilities.
“This research confirms what our customers have been saying for years about the ability of TotalView to drastically reduce development time and costs during the debugging stage of software development,” stated Chris Gottbrath, Rogue Wave Principal Product Manager. “As a market leader in debugging technology, we continually advocate the time and cost-savings benefit of ReplayEngine, Rogue Wave’s reverse debugging feature as a part of TotalView. We are pleased to see robust academic research highlighting this technique as an important opportunity for the global economy.”
Researchers at the University of Cambridge’s Judge Business School conducted a survey which found that when respondents used advanced reverse debugging tools, they spent an average of 26% less time on debugging. Specifically, the time spent fixing bugs decreased from 25% to 18%, and time spent reworking code decreased from 25% to 19%, while using reverse debuggers. This means, on the macro-economic level, that reverse debuggers have the potential to save 13% of total programming time, which translates to $41 billion of annual savings to the economy.
As hardware architectures continue to advance from multicore towards manycore, debugging applications developed for these platforms has become exponentially more challenging, with higher levels of concurrency compounding the difficulty of finding and fixing bugs. Conventional debuggers, which allow developers to step through code looking for errors only in the forward direction, are no longer sophisticated enough. When dealing with software that has threads or processes that interact with each other or with shared resources, it is common for timing and communications to get out of sync. Non-deterministic bugs, which arise from errors like race conditions and other concurrency issues, may only appear once in ten, a hundred, or even a thousand executions. If the application runs across large grids, or is shipped to thousands of customers, that rarely occurring bug now appears much more frequently. The ability to simply rewind the application’s execution to determine the exact point of the error is invaluable.
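The class of race condition described above can be reproduced in a few lines of threaded code. Here is a minimal Python sketch (purely illustrative and unrelated to TotalView) in which a lock makes a shared counter deterministic; remove the lock and the final count can vary from run to run:

```python
import threading

counter = 0
lock = threading.Lock()

def work(n):
    """Increment the shared counter n times."""
    global counter
    for _ in range(n):
        # Without this lock, the read-modify-write on `counter` can
        # interleave between threads and the final count becomes
        # non-deterministic: the bug may only appear once in many runs.
        with lock:
            counter += 1

threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000
```

With the lock, four threads of 100,000 increments always end at 400,000; this is exactly the kind of intermittent failure that stepping backwards from a bad final value helps diagnose.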
Designed to improve developer productivity, TotalView® simplifies and shortens the process of developing, debugging, and optimizing complex code. It provides a unique combination of capabilities for pinpointing and fixing hard-to-reproduce bugs, memory leaks, and performance issues. TotalView’s reverse debugging capability records the execution history of programs and makes that history available for diagnosis. This new approach—working back from a failure, error, or crash to its root cause—eliminates the need to restart a program repeatedly with different breakpoint locations. The ability to do reverse debugging, stepping freely both forwards and backwards through program execution, drastically reduces the amount of time invested in troubleshooting code.
Read the Reverse Debugging White Paper.
In the WSUS administrative console, you will find the Computers category. Here you will find all clients (endpoints) that receive updates via WSUS. Of course, clients can be Windows clients and/or Windows Servers.
Computer Groups are very important in a WSUS infrastructure, as you will be able to deploy only to the groups you want in a more methodical manner and then have the corresponding reports for each group separately.
By default, there are two computer groups, All Computers and Unassigned Computers. When a client first communicates with the WSUS server, it is added to both of these groups.
Create and manage computer groups
Besides the default groups, you can create as many computer groups as you think are necessary to manage clients more efficiently. The process is very simple.
Right-click All Computers, and then click Add Computer Group.
Enter the name of the group and click the Add button.
As shown in the above image, in my home lab I have created some groups based on the status of the individual client. How to separate your own groups and which clients will be included is at your own discretion.
Of course, in a production environment, it’s a good idea to create a separate test group with some test clients to test the updates before deploying them across the infrastructure.
To transfer one or more clients to a computer group, right-click the client and then Change Membership.
In the window that appears, select the computer group and click OK.
If you do not see a client in the list, select Any from the Status drop-down menu and click Refresh.
This is easy to handle in a small infrastructure: you create the groups, move the clients where you want, and you are done configuring them. But what if there are hundreds of clients that should be added to different groups? Can we automate this?
In this process, we use Client-side targeting. Using Group Policy, we can define in which Computer Groups each client will be assigned to, based on the client-side targeting setting that we set up in the GPO. Let’s look at it in practice.
Configure client-side targeting via Group Policy
Open the Group Policy administration console, create a new policy, and click Edit to configure it. Then browse to the following path.
Computer Configuration – Policies – Administrative Templates – Windows Components – Windows Update
Here, find the policy named Enable client-side targeting and double click or edit to set it.
Click on Enabled and, in the Target group name for this computer field, type the name of the group that will be assigned on the WSUS console. You can type more names by separating them with a semicolon.
As the note in the description says, this policy works only if you have enabled the Specify intranet Microsoft service update location policy.
Note: You will first need to create computer groups on the WSUS server and then add the clients through the group policy.
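Under the hood, the Enable client-side targeting policy simply writes registry values on the client. A sketch of the equivalent keys, assuming the standard Windows Update policy path (the group name Test-Clients is a hypothetical example):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"TargetGroupEnabled"=dword:00000001
"TargetGroup"="Test-Clients"
```

Checking these values on a client is a quick way to verify that the GPO has actually applied.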
So, by creating different policies with the corresponding computer groups and applying them to separate Active Directory OUs, you can bypass the above manual management of clients in groups. Also, if you add a new client to an OU in the future, then it will take the corresponding policies and updates from the WSUS server without having to do so through the WSUS administrative console.
Finally, whichever method you select for your infrastructure, you will also need to change the corresponding setting in the Options category of WSUS Server and specifically in the Computers section.
In the window that appears, select the client grouping method in Computer Groups and click OK.
import {isCenterOfAInsideB, calcDistanceBetweenCenters, getAbsoluteRectNoTransforms, isPointInsideRect, findCenterOfElement} from "./intersection";
import {printDebug, SHADOW_ELEMENT_ATTRIBUTE_NAME} from "../constants";
let dzToShadowIndexToRect;
/**
* Resets the cache that allows for smarter "would be index" resolution. Should be called after every drag operation
*/
export function resetIndexesCache() {
printDebug(() => "resetting indexes cache");
dzToShadowIndexToRect = new Map();
}
resetIndexesCache();
/**
* Resets the cache that allows for smarter "would be index" resolution for a specific dropzone, should be called after the zone was scrolled
* @param {HTMLElement} dz
*/
export function resetIndexesCacheForDz(dz) {
printDebug(() => "resetting indexes cache for dz");
dzToShadowIndexToRect.delete(dz);
}
/**
* Caches the coordinates of the shadow element when it's in a certain index in a certain dropzone.
* Helpful in order to determine "would be index" more effectively
* @param {HTMLElement} dz
* @return {number} - the shadow element index
*/
function cacheShadowRect(dz) {
const shadowElIndex = Array.from(dz.children).findIndex(child => child.getAttribute(SHADOW_ELEMENT_ATTRIBUTE_NAME));
if (shadowElIndex >= 0) {
if (!dzToShadowIndexToRect.has(dz)) {
dzToShadowIndexToRect.set(dz, new Map());
}
dzToShadowIndexToRect.get(dz).set(shadowElIndex, getAbsoluteRectNoTransforms(dz.children[shadowElIndex]));
return shadowElIndex;
}
return undefined;
}
/**
* @typedef {Object} Index
* @property {number} index - the would be index
* @property {boolean} isProximityBased - false if the element is actually over the index, true if it is not over it but this index is the closest
*/
/**
* Find the index for the dragged element in the list it is dragged over
* @param {HTMLElement} floatingAboveEl
* @param {HTMLElement} collectionBelowEl
* @returns {Index|null} - if the element is over the container the Index object otherwise null
*/
export function findWouldBeIndex(floatingAboveEl, collectionBelowEl) {
if (!isCenterOfAInsideB(floatingAboveEl, collectionBelowEl)) {
return null;
}
const children = collectionBelowEl.children;
// the container is empty, floating element should be the first
if (children.length === 0) {
return {index: 0, isProximityBased: true};
}
const shadowElIndex = cacheShadowRect(collectionBelowEl);
// the search could be more efficient but keeping it simple for now
// a possible improvement: pass in the lastIndex it was found in and check there first, then expand from there
for (let i = 0; i < children.length; i++) {
if (isCenterOfAInsideB(floatingAboveEl, children[i])) {
const cachedShadowRect = dzToShadowIndexToRect.has(collectionBelowEl) && dzToShadowIndexToRect.get(collectionBelowEl).get(i);
if (cachedShadowRect) {
if (!isPointInsideRect(findCenterOfElement(floatingAboveEl), cachedShadowRect)) {
return {index: shadowElIndex, isProximityBased: false};
}
}
return {index: i, isProximityBased: false};
}
}
// this can happen if there is space around the children so the floating element has
// entered the container but not any of the children; in this case we will find the nearest child
let minDistanceSoFar = Number.MAX_VALUE;
let indexOfMin = undefined;
// we are checking all of them because we don't know whether we are dealing with a horizontal or vertical container and where the floating element entered from
for (let i = 0; i < children.length; i++) {
const distance = calcDistanceBetweenCenters(floatingAboveEl, children[i]);
if (distance < minDistanceSoFar) {
minDistanceSoFar = distance;
indexOfMin = i;
}
}
return {index: indexOfMin, isProximityBased: true};
}
Process terminated through Debugger inspecting property under Linux
Issue Type: Bug
The included program has a class property whose getter accesses a Postgres database.
Steps:
put a breakpoint on the Console.WriteLine... line
run debugger
if stopped, hover the mouse over the demo symbol
the debugger will try to display the value of demo.ClientId and thereby terminate the debugged process.
Error Message: The target process exited with code 0 while evaluating the function 'crash_demo.Program.CrashDemo.ClientId.get'.The program '[893707] crash-demo.dll' has exited with code 0 (0x0).
might have to do with the native call that occurs while evaluating the prop
using System.Data;
using System;
using Npgsql;
namespace crash_demo
{
class Program
{
static void Main(string[] args)
{
CrashDemo demo = new CrashDemo();
Console.WriteLine("ClientIs is " + demo.ClientId);
}
public class CrashDemo
{
public string ClientId
{
get
{
string clientId = string.Empty;
using (IDbConnection connection = GetConnection())
{
IDbCommand dbCommand = connection.CreateCommand();
dbCommand.CommandType = CommandType.Text;
dbCommand.CommandText = @"select client_id from clients where kdnr = '123456'";
IDataReader reader = dbCommand.ExecuteReader();
if (reader.Read())
{
clientId = (string)reader["client_id"];
}
reader.Dispose();
return clientId;
}
}
}
IDbConnection GetConnection()
{
IDbConnection connection = new NpgsqlConnection();
connection.ConnectionString = "Host=<IP_ADDRESS>;Username=stocks;Database=stocks";
connection.Open();
return connection;
}
}
}
}
VS Code version: Code 1.43.2 (0ba0ca52957102ca3527cf479571617f0de6ed50, 2020-03-24T07:52:11.516Z)
OS version: Linux x64 5.5.10-200.fc31.x86_64
System Info
CPUs: Intel Core Processor (Skylake, IBRS) (3 x 2207)
GPU Status: 2d_canvas: unavailable_software; flash_3d: unavailable_off; flash_stage3d: unavailable_off; flash_stage3d_baseline: unavailable_off; gpu_compositing: unavailable_off; multiple_raster_threads: disabled_off; oop_rasterization: unavailable_off; protected_video_decode: unavailable_off; rasterization: unavailable_off; skia_renderer: disabled_off_ok; video_decode: unavailable_off; viz_display_compositor: enabled_on; viz_hit_test_surface_layer: disabled_off_ok; webgl: enabled_readback; webgl2: unavailable_off
Load (avg): 1, 1, 1
Memory (System): 7.77GB (3.30GB free)
Process Argv: --no-sandbox
Screen Reader: no
VM: 0%
Extensions (8): Extension (Author truncated, Version)
vscode-css-formatter (aes, 1.0.1)
vs-code-xml-format (fab, 0.1.5)
vscode-firefox-debug (fir, 2.7.1)
auto-using (Fud, 0.7.15)
csharp (ms-, 1.21.16)
mono-debug (ms-, 0.15.8)
debugger-for-chrome (msj, 4.12.6)
vscode-gitignore-generator (pio, 1.0.1)
Here's some more detailed information:
I ran ./vsdbg-ui --server --consoleLogging and pointed to this dbg instance in launch.json.
Scenario1 (already known but with new detail and dbg server log) :
the above mentioned test program
run to Console.WriteLine...
hover mouse over demoClass -> works, content of ClientId is displayed
move mouse away for 2 secs
hover over demoClass again -> debuggee terminates
The log shows that, between the 2nd evaluate request and the debuggee termination, the ResourceManager.dll is loaded.
Scenario2 (modified program, modified test):
include ResourceReader to pull ResourceManager.dll at the beginning
run to the ResourceReader line -> demoClass.ClientId is executed before the breakpoint, all dlls are loaded when the breakpoint is hit
hover over demoClass -> debuggee terminates immediately
The modified main() for scenario2:
static void Main(string[] args)
{
CrashDemo demoClass = new CrashDemo();
Console.WriteLine("ClientIs is " + demoClass.ClientId);
ResourceReader reader = new ResourceReader("");
}
scenario1.log
scenario2.log
is there any tolerance limit we can maintain for the fert material from sales point of view.
For example, as per the material master, the system brings the weight from master data, let's say 10 kg. The customer physically weighs the material and finds it is 11 kg, so the system should be able to check whether the weight is within the tolerance limit.
Looking for the response.
As per my knowledge, the tolerance limit is considered with respect to quantity.
we have over delivery tolerance and under delivery tolerance.
For example, consider a sales order with an ordered quantity of 150 PCA (piece articles) and an under-delivery tolerance limit of 20%. If the user creates the delivery for 120 PCA, the status of the SO will be Complete.
20% of 150 PCA is 30.
So even if you deliver only 120 PCA, the status of the SO will be Complete.
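The arithmetic above can be sketched in a few lines (a generic illustration of the tolerance check, not SAP code):

```python
def min_complete_qty(ordered, under_tolerance_pct):
    """Smallest delivered quantity that still completes the sales order."""
    return ordered * (1 - under_tolerance_pct / 100)

# 150 PCA ordered with a 20% under-delivery tolerance:
# 20% of 150 is 30, so anything from 120 PCA up completes the SO.
print(min_complete_qty(150, 20))  # 120.0
```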
Ok, got your point.
but where I should configure this percentage. ?
This % appears at sales order level, but I want to assign an already-decided % to the specific material, not to the sales order directly.
Hope you understand the scenario.
Looking for your response.
We can maintain tolerance level in master at two place 1. customer master 2. material master
The tolerance level maintained in the material master is applicable in the PP module. Meaning: say you have maintained a tolerance level of -10% to +10% in the material master. Now you create a sales order of 100 kg and, based on that, a production order of 100 kg. If for some reason you are able to produce only 90 kg, then based on the tolerance maintained in the material master your production order will still be completed.
The tolerance maintained in the customer master is what you see in the sales order, where you specify over- and under-delivery tolerance exactly; but it is customer specific.
But as you want to maintain tolerance for a specific material for delivery, you need to create customer material info record by using VD51, where you can maintain under and over delivery tolerance pertaining to a particular customer and material.
And secondly, the tolerance will depend not only on the quantity but also on the unit of measure maintained in the material master; so if your unit of measure is KG, you can set the tolerance level with respect to KG.
Your idea to create a customer material information record is close to my solution, but it is not feasible for my client to create a separate record for every customer and material just to hold this information.
Tolerance at customer level is useless in my scenario, and the PP-related tolerance is also useless.
Please provide any other possible solution if there is one.
I know this is an old thread, but I was looking for a way to get rid of our user exit and found this. We did not want to create thousands of CMIR records, so we added ZV45AF13 into MV45AFZZ in FORM userexit_move_field_to_vbap. Basically, we just check MARC for the record and the loading group, since bulk is the only type that gets an under-delivery tolerance.
IF MARC-LADGR = '0001'.
  IF VBAP-UNTTO = 0.
    VBAP-UNTTO = '15.0'.
  ENDIF.
ELSE.
  VBAP-UNTTO = '00.0'.
ENDIF.
What is a Network Topology | Types of network topology
Hello, I’m Sourav Khanna and welcome to the session on network topologies. Today we’re going to discuss what a topology is. Then we’re going to discuss peer to peer and client-server networking. And then we’re going to talk about some common network topologies. And with that, let’s go ahead and begin this session.
What is a topology?
Well, a topology is basically a map that can be used to describe how a network is laid out or how it functions. A network topology can be described as either logical or physical. A logical topology describes the theoretical signal path, while a physical topology describes the physical layout of the network.
And you should know that a logical and a physical topology don't need to match. And with that, let's move on to peer to peer versus client-server networks.
So are these really topologies? No, not really. They don’t describe the signal path or the physical layout of the network. But yes, they are topologies because they do describe how the network function. So that’s why they’re here in this discussion.
Now in a peer to peer topology, the nodes control and grant access to resources on the network. No one node or group of nodes controls access to a single specific type of resource. There's no real server present. Each node is responsible for the resources it's willing to share. Now, a client-server topology differs.
Network resource access is controlled by a central server or servers. A server determines what resources get shared, and who is allowed to use those resources. And even when those resources can be used.
Now, in the small office home office, it's common to find a hybrid topology, where a combination of peer to peer and client-server networking is used. Now, let's move on to some common network topology models. The first one we're going to discuss is the bus.
The original Ethernet standard established a bus topology for the network, both logically and physically. And what I mean by a bus topology is that the signal travelled along a predetermined path from end to end: it went from one end to the other, and then it could come back.
Now as time went on, the bus developed some mechanical problems that led to the development of different physical topologies. But the logical topology remained the same in order to maintain backward compatibility. So when we discuss Ethernet networks, the logical topology is always a bus topology, while the physical topology can be different.
So let’s talk about the bus. Again, the signal traverses from one end of the network to the other, no break in the line breaks the network and the ends of the bus line needed to be terminated in order to prevent signal bounce.
And what that means is that if there was a break or the ends of the line were not terminated, when the signal got to the end, it would bounce back through and create a storm. In a bus topology, the network cable is the central point.
Now kind of related to the bus is the ring, it’s a bus line with the endpoint connected together, a break in the ring breaks the ring. In a ring topology, it’s common to use two rings or multiple rings that can rotate the safeguards against a break in one ring bringing down the whole network.
Now ring topologies are not very common anymore in the land. But they’re still used in the wide-area network, especially when SONET or SDH is used.
Moving on from the ring, we have the star: the nodes radiate out from a central point. Now, when a star topology is implemented with a hub, a break in a segment brings down the whole bus, because the hub retransmits out all ports. When it's implemented with a switch, a break in a segment only brings down that segment; this is the most common implementation in the modern LAN. Then there's the mesh.
A true mesh topology is when all nodes are connected to all other nodes, that’s a full mesh. Now, those aren’t very common because they are expensive and difficult to maintain. But it’s common to find partial meshes. That’s where there are multiple paths between nodes. Now everyone knows at least one partial mesh network and that would be the internet.
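The expense of a full mesh grows quadratically with the number of nodes, which is why full meshes are rare in practice; a quick sketch of the link count:

```python
def full_mesh_links(n):
    # In a full mesh, each of the n nodes links to every other node
    # exactly once: n * (n - 1) / 2 links in total.
    return n * (n - 1) // 2

for n in (4, 10, 50):
    print(n, full_mesh_links(n))  # 4 -> 6, 10 -> 45, 50 -> 1225
```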
Now let’s move on to the point to point topology. That’s where two nodes or systems are connected directly together. Now if you’re talking about two PCs, that’s when they use a crossover cable to create a point to point topology. There’s no central device to manage the connection.
Now, this is still a common topology when implemented across a WAN connection utilizing a T1 line. We also need to discuss point to multipoint. In a point-to-multipoint topology, a central device controls the paths to all other devices. This differs from the star in that the central device is intelligent.
Now, wireless networks often implement point-to-multipoint topologies. When the wireless access point sends, all devices on the network receive the data. But when a device sends, its messages are only passed along to the destination. It's also a common topology when implementing a WAN across a packet-switched network.
Now let’s discuss MPLS. MPLS is Multiprotocol Label Switching, and it is a topology that’s used to replace both frame-relay switching and ATM switching. It’s a topology because it specifies a signal path and layout. MPLS is used to improve quality of service and the flow of network traffic.
It uses Label Edge Routers (LERs), which apply MPLS labels to incoming packets if they don’t already have them. The LERs then pass the labeled packets along to the Label Switching Routers (LSRs), which forward packets based on their MPLS labels.
That’s what makes this a topology.
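Label-based forwarding can be sketched as a simple table lookup; a toy model with made-up labels and interface names, not any vendor's implementation:

```python
# Toy label forwarding table of one LSR:
# incoming label -> (outgoing label, outgoing interface)
lfib = {
    16: (17, "eth1"),
    18: (19, "eth2"),
}

def forward(label):
    """Swap the label and pick the outgoing interface, as an LSR would."""
    out_label, out_if = lfib[label]
    return out_label, out_if

print(forward(16))  # (17, 'eth1')
```

The point of the sketch is that the LSR never inspects the IP header: the forwarding decision is made entirely on the label.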
Now that concludes this session on network topologies. We discussed what a topology is. Then we discussed the differences between peer to peer and client-server networking. And then I brought up some common network topology models that you should know.
ABAP password hash algorithms: CLEANUP_PASSWORD_HASH_VALUES
About a month ago, I was questioned about password hash algorithms, as the questioner attended to the SEC105 TechEd session (SAP Runs SAP: How to Hack 95% of all SAP ABAP Systems and How to Protect).
Before answering I decided to go through SAP note 1458262 (ABAP: recommended settings for password hash algorithms).
What I did
First I had a look at table USR02, in client 001:
For testing purposes, I disabled the password for the last user ID in the list:
Then I executed report CLEANUP_PASSWORD_HASH_VALUES:
USR02 after report’s execution:
After setting an initial password for the third user (bottom to top of the list):
And after the password was changed by the user:
My experiment was conducted in a standalone ABAP system. For systems that are part of a CUA, additional steps are required.
The report is very useful, making your system more secure – note that the report recommends an action: enforce the usage of stronger passwords. This will lead to password changes (an SM50 logon trace, per SAP note 495911, will show what happens behind the scenes).
After executing the report, you can find at least 3 “categories” in USR02:
- Password disabled users, with the following entries:
BCODE = 0000000000000000
CODVN = X
PASSCODE = 0000000000000000000000000000000000000000
PWDSALTEDHASH = blank
- Users with PWDSALTEDHASH filled:
BCODE and PASSCODE as above
- Users with PASSCODE filled:
BCODE as above, PWDSALTEDHASH blank and CODVN = F.
For the last case, the code version F means:
suboptimal, records with 7.00/7.01 hash value found
so a hash password is already in place.
It is important to realize that the report solely deletes existing (duplicate, weaker) hashes but cannot create new ones; for that, the report would have to know the passwords.
If the “strongest” password hash of some users is the PASSCODE, this is because of the time when those passwords were entered: the system created the hashes that were available then.
If you would like to have only pwdsaltedhash passwords, then the system administrator would have to provide new passwords for all users with codvn=F.
There is no automated change for this, as the password is unknown.
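The three USR02 categories above can be expressed as a small classifier. This is a sketch only: the field names follow USR02, but the rows and the logic are a simplification of what the note describes:

```python
def classify_usr02(row):
    """Rough classification of a USR02 row by its password hash fields."""
    if row["CODVN"] == "X":
        # BCODE/PASSCODE zeroed out, PWDSALTEDHASH blank
        return "password disabled"
    if row["PWDSALTEDHASH"]:
        return "PWDSALTEDHASH (strongest)"
    if row["CODVN"] == "F":
        return "PASSCODE only (suboptimal 7.00/7.01 hash)"
    return "unknown"

disabled = {"CODVN": "X", "PWDSALTEDHASH": "", "PASSCODE": "0" * 40}
print(classify_usr02(disabled))  # password disabled
```

Running this over an extract of USR02 would give a quick count of how many users still need a password change to get a PWDSALTEDHASH.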
SEC105 – SAP Runs SAP: How to Hack 95% of all SAP ABAP Systems and How to Protect
SAP note 2467 – Password rules and preventing incorrect logons
SAP note 495911 – Logon problem trace analysis
SAP note 862989 – New password rules as of SAP NetWeaver 2004s (NW ABAP 7.0)
SAP note 1023437 – ABAP syst: Downwardly incompatible passwords (since NW2004s)
SAP note 1237762 – ABAP systems: Protection against password hash attacks
SAP note 1458262 – ABAP: recommended settings for password hash algorithms
"""
蘋果日報搜尋測試
"""
import unittest
from twnews.search import NewsSearch
#@unittest.skip
class TestAppleDaily(unittest.TestCase):
"""
蘋果日報搜尋測試
"""
def setUp(self):
self.keyword = '上吊'
self.nsearch = NewsSearch('appledaily', limit=10)
def test_01_filter_title(self):
"""
測試蘋果日報搜尋
"""
results = self.nsearch.by_keyword(self.keyword, title_only=True).to_dict_list()
for topic in results:
if '上吊' not in topic['title']:
                self.fail('Title must contain "上吊"')
def test_02_search_and_soup(self):
"""
測試蘋果日報搜尋+分解
"""
nsoups = self.nsearch.by_keyword(self.keyword).to_soup_list()
for nsoup in nsoups:
if nsoup.contents() is None:
            # home.appledaily.com.tw has an SSL certificate problem; ignore errors it causes
if not nsoup.path.startswith('https://home.appledaily.com.tw'):
                msg = 'Content must not be None, URL={}'.format(nsoup.path)
self.fail(msg)
In your organization, what term do you use to refer to the smallest set of features that would constitute a viable release to your users? In other words, how do you refer to the smallest set of must-have features you could possibly release to your users that they would actually pay you for or—if deployed internally—use?
In this posting I explain why I believe MVP and MMF are not good choices for the smallest set of must-have features that would constitute a release, and why I use the term MRF (Minimum Releasable Features).
If you have taken one of my classes or read my Essential Scrum book then you know how important I think it is for people who collaborate to share a common vocabulary of terms (see this blog post). When teams don’t take the time to purposefully define their terms, you end up with two people on the same team having a conversation using the same term but talking right past each other! Worse yet, they actually think they are in agreement!
And, even people who don’t collaborate on the same effort still need a common vocabulary to communicate effectively. I recently attended a conference where I listened to a conversation between two very knowledgeable agile practitioners. It soon became apparent to me that each was using the term MVP completely differently, which was causing both of them quite a bit of confusion. Their exchange actually prompted me to write this posting!
MVP (Minimum Viable Product)
Let’s start with MVP or Minimum Viable Product. This term has a lot of baggage associated with it and is commonly used in at least two different ways.
Some people use MVP to mean the most pared down version of a product that can still be released. You know, the smallest, cheapest, most basic version of your final product that you want to get into the marketplace quickly. The primary implication being that the MVP has enough value that at least some people are willing to buy it and use it. It is also hoped that the MVP will be useful for acquiring knowledge that can be used to guide future development. In other words, you put the pared-down version into the marketplace to better understand your customer needs before you invest in building the full-featured, gold-plated version of the product.
Other people use MVP to mean the simplest experiment you can run to validate the current most-important customer hypothesis. Or as the Lean Startup community says, the simplest thing you can do, with the least effort, to collect the maximum amount of validated learning about the customer. Notice that the focus of this usage is on the learning, and not whether you actually delivered a product to customers that they could use. For example, an MVP could be a landing page that you use to validate your value proposition and even your pricing. You wouldn’t call your landing page your product, or even a bare-bones version of your product. Instead, the landing-page MVP is a quick and inexpensive way to validate important assumptions you are making about your customer.
So, to summarize, some people use MVP to mean the minimum feature set that they can deliver of their product, with a secondary focus on learning from that product before larger investments are made. Others use MVP to refer to a knowledge-acquisition technique, and whether the MVP itself is actually usable by the customer is of secondary interest (interesting to the extent that the MVP might need to be usable by the customer in order to acquire validated learning).
Because of these two common established usages, I have personally avoided using MVP to mean the minimum set of releasable features that a customer would actually pay to use since a meaningful subset of people would be confused by this usage.
MMF (Minimum Marketable Feature)
So what about using MMF (Minimum Marketable Feature) to mean the smallest releasable set of features? MMF is a term that was coined by Mark Denne and Jane Cleland-Huang in their 2003 book Software by Numbers. Their definition of MMF is a chunk of functionality that delivers a subset of the customer’s requirements, and that is capable of returning value to the customer when released as an independent entity.
One way to think about an MMF is that it represents the must-have subset of a larger customer feature. In other words it is the minimum part of a larger feature that would be useful / valuable to a customer without the nice-to-have parts of the same feature.
For example, if you were developing a product that allowed customers to pay with gift certificates, the MMF of paying with a gift certificate might only allow the customer to use one gift certificate per purchase. Later on you might implement the nice-to-have part of the pay-with-a-gift-certificate feature by letting the customer pay for one order with multiple gift certificates.
An important observation is that just building the MMF for paying with a gift certificate would not, on its own, constitute a releasable product. We would also need to have the MMF for searching for the product you want to buy (e.g., there might be 50 ways of searching for a product and we can go to market with just the single most popular way), the MMF for carting the product (assuming we were implementing a shopping cart), as well as the MMF for paying with a gift certificate.
So in many cases, a single MMF is not actually the minimum set of features we can include in a release. Typically a release is made up of a collection of MMFs that must be delivered together in order for the users to get any value.
Conclusion – Use MRF (Minimum Releasable Features)
Given the two established and somewhat contradictory uses of MVP and the fact that often a single MMF is not a sufficient release, I prefer to use another term that I have labeled: MRF (Minimum Releasable Features). I feel that this term more clearly reflects the intent of delivering the absolute minimum set of must-have features that can be released to our users and still be usable.
Have the various meanings of MMF, MVP, or MRF caused you and your team any difficulties? I’d love to hear what you have done to ensure you are all talking about the same thing.
|
OPCFW_CODE
|
from itertools import combinations
from multiprocessing import Pipe
from multiprocessing.connection import Connection
from typing import Any, Dict, List, Tuple
class Pipeline(object):
    """A directional handle on one end of a duplex connection."""

    def __init__(self, parent: str, child: str, conn: Connection) -> None:
        self.__parent = parent
        self.__child = child
        self.__conn = conn

    def __del__(self) -> None:
        self.__conn.close()

    def send(self, val: Any) -> None:
        self.__conn.send(val)

    def receive(self) -> Any:
        return self.__conn.recv()

    def poll(self, timeout: float) -> bool:
        return self.__conn.poll(timeout)
class LocalNetwork(object):
    """All pipelines from one parent process to its child processes."""

    def __init__(self, parent: str, children: List[str],
                 conns: List[Pipeline]) -> None:
        self.__parent = parent
        self.__children = children
        self.__lnet: Dict[str, Pipeline] = dict(zip(children, conns))

    def send_to(self, child: str, val: Any) -> None:
        self.__lnet[child].send(val)

    def send_all(self, val: Any) -> None:
        for conn in self.__lnet.values():
            conn.send(val)

    def receive_from(self, child: str) -> Any:
        return self.__lnet[child].receive()

    def poll_from(self, child: str, timeout: float) -> bool:
        return self.__lnet[child].poll(timeout)
class GlobalNetwork(object):
    """Pairwise duplex pipes between every pair of process ids."""

    def __init__(self, process_ids: List[str]) -> None:
        comb_ids = list(combinations(process_ids, 2))
        pair_procs: List[Tuple[str, str]] = []
        conns: List[Connection] = []
        for comb in comb_ids:
            # One duplex Pipe serves both directions of a pair: the first
            # end is keyed (a, b) and the second (b, a).
            pair_procs.extend([comb, tuple(reversed(comb))])
            conns.extend(Pipe())
        pipes = [
            Pipeline(pair[0], pair[1], conn)
            for pair, conn in zip(pair_procs, conns)
        ]
        self.__proc_ids = process_ids
        self.__pairs = pair_procs
        self.__net: Dict[Tuple[str, str],
                         Pipeline] = dict(zip(pair_procs, pipes))

    def get_pipeline_between(self, parent: str, child: str) -> Pipeline:
        return self.__net[(parent, child)]

    def get_local_net(self, parent: str) -> LocalNetwork:
        pairs = [pair for pair in self.__net.keys() if pair[0] == parent]
        pipes = [self.__net[pair] for pair in pairs]
        children = [pair[1] for pair in pairs]
        return LocalNetwork(parent, children, pipes)
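The core trick in GlobalNetwork.__init__ is that one duplex Pipe covers both directions of a pair. A minimal standalone sketch of that pairing logic (the process ids here are arbitrary labels chosen for illustration):

```python
from itertools import combinations
from multiprocessing import Pipe

# Mirror of the pairing logic in GlobalNetwork.__init__: each unordered
# pair of ids gets one duplex Pipe, whose two ends are keyed by
# direction, (sender, receiver).
ids = ["a", "b", "c"]
net = {}
for left, right in combinations(ids, 2):
    left_end, right_end = Pipe()
    net[(left, right)] = left_end
    net[(right, left)] = right_end

# Sending on the (a, c) end is received on the (c, a) end.
net[("a", "c")].send("hello")
msg = net[("c", "a")].recv()
print(msg)  # -> hello
```

Three ids yield three pairs and six keyed endpoints, so every process can reach every other without a central broker.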
|
STACK_EDU
|
How to serialize to inline table vs. dotted table?
It seems that nested Tables are always serialized to dotted tables. e.g.:
let in_string = "[dependencies]\nhdk = { git = \"https://github.com/foo/bar\", branch = \"develop\" }";
let val = toml::from_str::<Value>(in_string)?;
println!("{:?}", toml::to_string(&val));
Even though the input string is an inline table, the output string uses a dotted table.
Is there no way to specify inline table when serializing?
Currently this crate unfortunately doesn't support a lot of control over serialization and formatting; it just guarantees it emits some valid toml document, but not necessarily the prettiest
Support for serializing inline tables would be great, because I suspect that is why I'm getting a ValueAfterTable error when serializing an object like:
#[derive(Deserialize, Serialize, Debug)]
#[serde(untagged)]
pub enum StringOrCmd {
S(String),
Cmd { cmd: Vec<String> },
}
#[derive(Deserialize, Serialize, Debug)]
pub struct BackendConfig {
pub client_id: StringOrCmd,
pub client_secret: StringOrCmd,
}
#[derive(Deserialize, Serialize, Debug)]
pub struct Config {
backend: BackendConfig,
}
This should serialize to something like:
[backend]
client_id = { cmd = ["cat", ".clientid"] }
client_secret = "somesecret"
But becomes invalid TOML if not using inline table format:
[backend]
[backend.client_id]
cmd = ["cat", ".clientid"]
# oops!
client_secret = "somesecret"
The solution would be either supporting inline tables or using extra logic to ensure all literal types under a table are written out before any nested table types.
I just got hit by that ValueAfterTable problem and it'd be really nice if it worked.
Just for reference, cargo-edit uses toml_edit for parsing and writing cargo.toml files. Could be useful to see how they do it for anyone planning on implementing this for toml-rs.
I'm definitely on team "would love to have an inline table serialization option!" here :)
:+1: Hitting this issue with cargo-outdated when trying to format Cargo.toml for cargo API. This causes interference with the namespaced-features feature.
https://github.com/alexcrichton/toml-rs/issues/406
I don't want to be one of those people who beats a dead horse, but I would like to show a potential use case for such a feature. Currently I have some structures that I want to let a user edit during runtime that come out a little unreadable:
# Edit your contact information, or add any as necessary
[value."LABEL ME 1"]
type = 'Address'
[value."LABEL ME 1".content]
name = 'Earth'
[value."LABEL ME 2"]
type = 'Email'
content = 'An email address. E.g. <EMAIL_ADDRESS>'
[value."LABEL ME 3"]
type = 'Phone'
content = 'A phone number. E.g. `600-555-5555`'
I would like that to look like this, which would probably be easier to read:
# Edit your contact information, or add any as necessary
[value]
"LABEL ME 1" = {type='Address', content={name='Earth'}}
"LABEL ME 2" = {type='Email', content='An email address. E.g. <EMAIL_ADDRESS>'}
"LABEL ME 3" = {type='Phone', content='A phone number. E.g. `600-555-5555`'}
Or anything else that keeps the label in one place (my main concern is a user forgetting to edit the label in the content section, or putting a typo there).
There's a round-trip test of an inline table at https://github.com/alexcrichton/toml-rs/blob/60b874308e6792a73cc00517a60bbef60a12e3cc/test-suite/tests/valid/example-v0.4.0.toml#L34-L46 which makes me think this must be possible somehow with the right structures. Maybe it only works with the toml-rs internal types at the moment and not with types defined outside the toml-rs library.
Maintenance of this crate has moved to the https://github.com/toml-rs/toml repo. As a heads up, we plan to move toml to be on top of toml_edit, see https://github.com/toml-rs/toml/issues/340.
Closing this out. If this is still a problem, feel free to recreate this issue in the new repo.
|
GITHUB_ARCHIVE
|
package pattern
import (
"fmt"
"strings"
)
// Token is a type for specifying the tokens used in noise.
type Token string
const (
// TokenE is the e from noise specs.
TokenE = Token("e")
// TokenS is the s from noise specs.
TokenS = Token("s")
// TokenEe is the ee from noise specs.
TokenEe = Token("ee")
// TokenEs is the es from noise specs.
TokenEs = Token("es")
// TokenSe is the se from noise specs.
TokenSe = Token("se")
// TokenSs is the ss from noise specs.
TokenSs = Token("ss")
// TokenPsk is the psk from noise specs.
TokenPsk = Token("psk")
// TokenInitiator indicates the message is sent from initiator to responder.
TokenInitiator = Token("->")
// TokenResponder indicates the message is sent from responder to initiator.
TokenResponder = Token("<-")
tokenInvalid = Token("invalid")
preMessageIndicator = "..."
errConsecutiveTokens = "cannot have two consecutive lines using %s"
errRepeatedTokens = "token '%s' appeared more than once"
errMissingToken = "need token %s before %s"
errMustBeInitiator = "the first line must be from initiator"
errInvalidLine = "line '%s' is invalid"
errPskNotAllowed = "psk is not allowed"
errTooManyTokens = "pre-message cannot have more than 2 tokens"
errTokenNotAllowed = "%s is not allowed in pre-message"
)
type patternLine []Token
type pattern []patternLine
func errInvalidPattern(format string, a ...interface{}) error {
prefix := "Invalid pattern: "
return fmt.Errorf(prefix+format, a...)
}
// parseMessageLine takes a message line, validates it, and splits it into a
// slice of tokens. For example,
// "-> e, s" becomes ["->", "e", "s"]
func parseMessageLine(l string) (patternLine, error) {
pl := patternLine{}
tokens := strings.Split(l, " ")
// a valid line must have at least two items
if len(tokens) < 2 {
return nil, errInvalidPattern(errInvalidLine, l)
}
// the first item of a line must be a direction, left or right.
t, err := parseTokenFromString(tokens[0])
if err != nil {
return nil, err
}
if t != TokenResponder && t != TokenInitiator {
return nil, errInvalidPattern(errInvalidLine, l)
}
pl = append(pl, t)
for _, token := range tokens[1:] {
// "e," becomes "e"
tokenTrimmed := strings.Trim(token, " ,")
t, err := parseTokenFromString(tokenTrimmed)
if err != nil {
return nil, err
}
pl = append(pl, t)
}
return pl, nil
}
// parseTokenFromString turns a token string into a token type.
func parseTokenFromString(s string) (Token, error) {
switch s {
case "e":
return TokenE, nil
case "s":
return TokenS, nil
case "ee":
return TokenEe, nil
case "es":
return TokenEs, nil
case "se":
return TokenSe, nil
case "ss":
return TokenSs, nil
case "->":
return TokenInitiator, nil
case "<-":
return TokenResponder, nil
case "psk":
return TokenPsk, nil
default:
return tokenInvalid, fmt.Errorf("token %s is invalid", s)
}
}
// tokenize takes a message string and turns it into a pattern. For example, it
// takes,
// -> e
// <- e, ee
// and returns a pattern, which is []patternLine. A patternLine is []Token.
func tokenize(ms string, pre bool) (pattern, error) {
p := pattern{}
// remove message whitespaces
ms = strings.TrimSpace(ms)
// break the message line by line, a message,
// -> e
// <- e, ee
// becomes,
// "-> e" and "<- e, ee"
for _, line := range strings.Split(ms, "\n") {
// remove line whitespaces
line = strings.TrimSpace(line)
// "<- e, ee" now becomes, "<-", "e", "ee"
pl, err := parseMessageLine(line)
if err != nil {
return nil, err
}
p = append(p, pl)
}
// validate the pattern based on whether it's a pre-message or not
if pre {
if err := validatePrePattern(p); err != nil {
return nil, err
}
return p, nil
}
if err := validatePattern(p); err != nil {
return nil, err
}
return p, nil
}
// validatePrePattern checks that a pre-message pattern is valid. A valid
// pre-message must pass the following checks,
// - it can only have a line of "e", "s", or "e, s", no "psk" is allowed.
func validatePrePattern(pl pattern) error {
isInitiator := pl[0][0] == TokenInitiator
prevIsInitiator := !isInitiator
for _, line := range pl {
isInitiator = line[0] == TokenInitiator
// In addition to the rules specified in the noise protocol, it's also
// required that the initiator/responder cannot send two consecutive
// messages, they must alternate. For instance,
// -> s
// <- s
// is a legal pattern, while,
// -> s
// -> s
// is not legal as they are both from the initiator (->)
if prevIsInitiator == isInitiator {
return errInvalidPattern(errConsecutiveTokens, line[0])
}
prevIsInitiator = isInitiator
// pre-message can have at most 2 tokens, e and s, plus a direction
// token, "->" or "<-", so max is 3.
if len(line) > 3 {
return errInvalidPattern(errTooManyTokens)
}
// check the tokens
tokens := line[1:]
if len(tokens) == 1 {
t := tokens[0]
switch t {
case TokenE:
case TokenS:
default:
return errInvalidPattern(errTokenNotAllowed, t)
}
}
if len(tokens) == 2 {
if tokens[0] != TokenE || tokens[1] != TokenS {
return errInvalidPattern(errTokenNotAllowed, tokens)
}
}
}
return nil
}
// validatePattern implements the rules specified in the noise specs, which,
// 1. Parties must not send their static public key or ephemeral public key
// more than once per handshake.
// 2. Parties must not perform a DH calculation more than once per handshake
// (i.e. there must be no more than one occurrence of "ee", "es", "se", or
// "ss" per handshake).
// 3. After an "se" token, the initiator must not send a handshake payload or
// transport payload unless there has also been an "ee" token.
// 4. After an "ss" token, the initiator must not send a handshake payload or
// transport payload unless there has also been an "es" token.
// 5. After an "es" token, the responder must not send a handshake payload or
// transport payload unless there has also been an "ee" token.
// 6. After an "ss" token, the responder must not send a handshake payload or
// transport payload unless there has also been an "se" token.
func validatePattern(pl pattern) error {
tokenSeen := map[Token]int{}
// checks that the first line in the message is from the initiator.
isInitiator := pl[0][0] == TokenInitiator
if !isInitiator {
return errInvalidPattern(errMustBeInitiator)
}
prevIsInitiator := !isInitiator
for _, line := range pl {
count := map[Token]int{}
isInitiator = line[0] == TokenInitiator
// In addition to the rules specified in the noise protocol, it's also
// required that the initiator/responder cannot send two consecutive
// messages, they must alternate. For instance,
// -> e, s
// <- e, ee, se
// is a legal pattern, while,
// -> e, s
// -> e, ee, se
// is not legal as they are both from the initiator (->)
if prevIsInitiator == isInitiator {
return errInvalidPattern(errConsecutiveTokens, line[0])
}
prevIsInitiator = isInitiator
// TODO: psk token can only be at the beginning or end of a line
for _, token := range line[1:] {
// check rule 1 and 2 on each pattern line. Note that a "psk" token
// is allowed to appear one or more times in a handshake pattern.
if token != TokenPsk && count[token] > 0 {
return errInvalidPattern(errRepeatedTokens, token)
}
count[token]++
tokenSeen[token]++
if isInitiator {
// check rule 3 and 4
switch token {
case TokenSe:
// must have seen an "ee" token before
if tokenSeen[TokenEe] < 1 {
return errInvalidPattern(
errMissingToken, TokenEe, TokenSe)
}
case TokenSs:
// must have seen an "es" token before
if tokenSeen[TokenEs] < 1 {
return errInvalidPattern(
errMissingToken, TokenEs, TokenSs)
}
}
} else {
// check rule 5 and 6
switch token {
case TokenEs:
// must have seen an "ee" token before
if tokenSeen[TokenEe] < 1 {
return errInvalidPattern(
errMissingToken, TokenEe, TokenEs)
}
case TokenSs:
// must have seen an "se" token before
if tokenSeen[TokenSe] < 1 {
return errInvalidPattern(
errMissingToken, TokenSe, TokenSs)
}
}
}
}
}
return nil
}
|
STACK_EDU
|
dynamic seccomp policies (using BPF filters)
From: Will Drewry <firstname.lastname@example.org>
Subject: [RFC,PATCH 0/2] dynamic seccomp policies (using BPF filters)
Date: Wed, 11 Jan 2012 11:25:08 -0600
The goal of the patchset is straightforward:
To provide a means of reducing the kernel attack surface.
In practice, this is done at the primary kernel ABI: system calls.
Achieving this goal will address the needs expressed by many systems:
qemu/kvm, openssh, vsftpd, lxc, and chromium and chromium os (me).
While system call filtering has been attempted many times, I hope that
this approach shows more promise. It works as described below and in
the patch series.
A userland task may call prctl(PR_ATTACH_SECCOMP_FILTER) to attach a
BPF program to itself. Once attached, all system calls made by the
task will be evaluated by the BPF program prior to being accepted.
Evaluation is done by executing the BPF program over the struct
user_regs_state for the process.
!! If you don't care about background or reasoning, stop reading !!
Past attempts have used:
- bitmap of system call numbers evaluated by seccomp (or tracehooks)
- standalone data structures and extra entry hooks
(cgroups syscall, systrace)
- a collection of ftrace filter strings evaluated by seccomp
- perf_event hackery to allow process termination when an event matches
In addition to the publicly posted approaches, I've personally attempted
continued deeper integration with ftrace along a number of different
lines (lead up to that can be found here). What inspired the current
patch series was a number of realizations:
1. Userland knows its ABI - that's how it made the system calls in the first place.
2. We already exposed a filtering system to userland processes in the
form of BPF and there is continued focus on optimizing evaluation
even after so many years.
3. System call filtering policies should not expose
time-of-check-time-of-use (TOCTOU) vulnerable interfaces but should
expose all the information that may be relevant to a syscall policy.
The prior seccomp-ftrace implementations struggled with very
fixable challenges in ftrace: incomplete syscall coverage,
mismatched syscall names versus unistd, incomplete arch coverage,
etc. These challenges may all be fixed with some time and effort, and
potentially, even closer integration. I explored a number of
alternative approaches from making system call tracepoints per-thread
and "active" to adding a new less-perf-oriented system call.
In the process of experimentation, a number of things became clear:
- perf/ftrace system-wide analysis goals don't align with lightweight
- ftrace/perf ABI doesn't mix well with security policy enforcement,
reduced attack surface environments, or keeping users from specifying
vulnerable filtering policies.
- other than system calls, tracepoints aren't considered ABI-stable.
The core focus of ftrace and perf is to support system-wide
performance and debugging tracing. Despite its amazing flexibility,
there are tradeoffs that are made to provide efficient system-wide
behavior that are less efficient at a per-thread level. For instance,
system call tracepoints are global. It is possible to make them
per-thread (since they use a TIF anyway). However, doing so would mean
that a system-wide system call analysis would require one trace event
per thread rather than one total. It's possible to alleviate that pain,
but that in turn requires more bookkeeping (global versus local
tracepoint registrations mapping to the thread info flag).
Another example is the ftrace ABI. Both the debugfs entry point with
unstable event ids and the perf-oriented perf_event_open(2) are not
suitable to providing a subsystem which is meant to reduce the attack
surface -- much less avoid maintainer flame wars :) The third aspect of
its ABI was also concerning and hints at yet-another-potential struggle.
The ftrace filter language happily accepts globbing and string matching.
This is excellent for tracing, but horrible for system call
interposition. If, despite warning, a user decides that blocking a
system call based on a string is what they want, they can do it. The
result is that their policy may be bypassed due to a time of check, time
of use race. While addressable, it would mean that the filtering engine
would need to allow operation filtering or offer a "secure" subset.
A side challenge that emerged from the desire to enable tracing to act
as a security policy mechanism was the ability to enact policy over more
than just the system calls. While this would be doable if all
tracepoints became active, there is a fundamental problem in that very
little, if any, tracepoints aside from system calls can be considered
stable. If a subset were to emerge as stable, there is still the
challenge of enacting security policy in parallel with tracing policy.
In an example patch where security policy logic was added to
perf_event_open(2), the basics of the system worked, but enforcement of
the security policy was simplistic and intertwined with a large number
of event attributes that were meaningless or altered the behavior.
At every turn, it appears that the tracing infrastructure was unsuited
for being used for attack surface reduction or as a larger security
subsystem on its own. It is well suited for feeding a policy
enforcement mechanism (like seccomp), but not for letting the logic
co-exist. It doesn't mean that it has security problems, just that
there will be a continued struggle between having a really good perf
system and a really good kernel attack surface reduction system if
they were merged. While there may be some distant vision where the
apparent struggle does not exist, I don't see how it would be reached.
Of course, anything is possible with unlimited time. :)
That said, much of that discussion is history, included to fill in some of
the gaps since I posted the last ftrace-based patches. This patch series
should stand on its own as both straightforward and effective. In my
opinion, this is the direction I should have taken before I sent my
I am looking forward to any and all feedback - thanks!
Will Drewry (3):
seccomp_filters: dynamic system call filtering using BPF programs
Documentation/prctl/seccomp_filter.txt | 179 ++++++++
fs/exec.c | 5 +
include/linux/prctl.h | 3 +
include/linux/seccomp.h | 70 +++++-
kernel/Makefile | 1 +
kernel/fork.c | 4 +
kernel/seccomp.c | 8 +
kernel/seccomp_filter.c | 639 +++++++++++++++++++++++++++++++++++++++++++++++
kernel/sys.c | 4 +
security/Kconfig | 12 +
9 files changed, 743 insertions(+), 3 deletions(-)
create mode 100644 kernel/seccomp_filter.c
create mode 100644 Documentation/prctl/seccomp_filter.txt
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to email@example.com
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
|
OPCFW_CODE
|
API testing is a method of testing the quality, performance, security and reliability of an API to help locate bugs and verify that an application behaves as expected.
API testing is one of the most effective ways to protect an API from vulnerabilities.
Testing and validating APIs are becoming increasingly important in the software development lifecycle, as API testing can significantly reduce the time required for integration, validation and verification efforts. In this article, we’ll cover all you need to know about API testing, its importance and how to do it with recommended tools.
An API (Application Programming Interface) is a set of functions and procedures that allow applications and software to communicate with each other. They allow developers and third-party users to interact with data stored on another device or system (e.g., databases) remotely through an internet connection.
APIs are the backbone of modern software development used by many websites to connect to each other, allowing developers to build their own tools on top of them. They can also be used for adding features and functionality to your application by using third-party libraries or frameworks which will further improve its performance and usability.
API testing is a type of software testing that involves testing an API directly to verify and validate its functionality, mechanics, reliability, performance and security. The goal of API testing is to automate test scenarios that would require manual execution by developers or testers. These scenarios might include:
API testing is a form of black box testing: it exercises an API's behavior through its interface, without user interaction or knowledge of the system's internal workings, to determine whether the API is implemented properly.
API testing helps:
While API testing has a number of advantages, it also has drawbacks:
An API testing approach should start with a precisely defined program scope and a thorough comprehension of how the API is intended to function. Besides, API testing is not just about making sure that your code works correctly, but also ensuring that it is robust and reliable.
Testing teams should think about the following issues:
Additionally, tests should be built to make sure users can’t have unanticipated effects on the application, the API can handle the expected user load, and the API is compatible with a variety of browsers and devices. Such testing determines how user-friendly and functional the API is and how effectively the API integrates with other platforms.
There are many types of API tests, but the most common ones are:
Functional testing: A functional test is used to verify whether all the functions in a particular API work. It ensures that an API provides the appropriate response to a given request.
Load testing: This kind of API test evaluates how an API responds to a lot of queries in a short amount of time.
Security testing: These tests evaluate an API’s ability to respond to and fend off online threats.
Penetration testing: This involves users who are unfamiliar with the API attempting to attack the API, allowing testers to evaluate the threat vector from an unbiased standpoint.
Runtime and error detection testing: These API tests often concentrate on monitoring, execution flaws, resource leaks or error detection and are intended to assess how well the API actually performs.
Fuzz testing: In this kind of API test, a lot of randomly generated requests are sent to see if your API answers erroneously, handles any inputs incorrectly, or crashes.
Validation testing: These tests are carried out to confirm the functionality and behavior of the API.
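A functional test like those described above can be sketched in a few lines. The endpoint shape, status handling, and field names below are invented for illustration; a real test would feed the function the status code and JSON body returned by an HTTP client.

```python
# Required fields for a hypothetical "get user" endpoint's response body.
REQUIRED_FIELDS = {"id", "name", "email"}

def check_user_response(status_code: int, body: dict) -> list:
    """Return a list of failure messages; an empty list means the check passed."""
    failures = []
    if status_code != 200:
        failures.append("expected status 200, got %d" % status_code)
    missing = REQUIRED_FIELDS - body.keys()
    if missing:
        failures.append("missing fields: %s" % sorted(missing))
    return failures

# Simulated responses, standing in for what an HTTP client would return.
ok = check_user_response(200, {"id": 1, "name": "Ada", "email": "ada@example.com"})
bad = check_user_response(500, {"id": 1})
print(ok)   # -> []
print(bad)
```

Keeping the assertions in a pure function like this makes the same check reusable across functional, regression, and load runs, independent of how the response was obtained.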
REST (Representational State Transfer) API testing is a web automation testing technique for testing the RESTful APIs of online applications. The goal of REST API testing is to submit multiple HTTP/S requests and record the responses to determine whether or not the REST API is functioning properly. The GET, POST, PUT and DELETE methods are used to test the REST API.
On the other hand, SOAP (Simple Object Access Protocol) was created as an intermediary protocol to make data sharing simple between applications written in different programming languages and running on different platforms.
With the right API testing tools and processes, you can build a robust test suite that covers all of your application's features and functions. These tools range from paid subscriptions to open-source offerings, and include:
SoapUI: This tool focuses on evaluating SOAP and REST API functionality as well as web services. It’s an excellent tool for preventing API attacks as it has an easy-to-use graphical user interface, offers enterprise-class capability, and makes it simple to create and execute automated functional, regression and load tests.
Salt Security: Salt offers security for the APIs at the core of every modern application. The Salt platform automatically detects APIs and exposes sensitive data using a cloud-scale big data engine powered by their AI and ML techniques, detects and prevents attackers, tests and scans APIs throughout the build phase, and provides remediation insights learned in runtime to help dev teams improve their API security posture.
JMeter by Apache: This is a free, open-source load and functional API testing tool used to test a wide range of protocols and measure performance. Request chaining is supported by Apache JMeter, which may be used to test dynamic web applications as well as static and dynamic resources.
Apigee: This is a Google Cloud API testing tool that specializes in API performance testing. In order to provide data feeds and enhance communication capabilities, API gateways are used to connect websites and services that employ RESTful APIs.
Test Studio: This API testing tool helps to test RESTful APIs using a low-code, automated method, and it utilizes API calls to enhance automated functional UI tests.
Swagger UI: This open-source tool helps generate a web page listing all the used APIs. It allows for development across the whole API lifecycle, from design and documentation to testing and deployment.
Postman: This is a Google Chrome app that automates and verifies API testing. To build better APIs more quickly, Postman enhances collaboration and streamlines each stage of the API lifecycle.
OWASP ZAP: Zed Attack Proxy (ZAP) is a free, open-source penetration testing tool maintained by the Open Web Application Security Project (OWASP). This integrated penetration testing tool makes finding vulnerabilities in web applications simple.
Using simple-to-create and maintain API tests, Test Studio enables teams to increase their functional testing efforts regardless of testing seniority or expertise.
With Test Studio, you can:
John Iwuozor is a freelance writer for cybersecurity and B2B SaaS brands. He has written for a host of top brands, the likes of ForbesAdvisor, Technologyadvice and Tripwire, among others. He’s an avid chess player and loves exploring new domains.
|
OPCFW_CODE
|
First post, by harry250
Hi DOSBox team! I'm really getting into using DOSbox for running all my DOS games collection.
I've got Simon Hradecky's Airline Simulator 2 in "bigbox" on original CD running dual screen via two DOSBox 0.74.3 sessions connected together using the built-in IPX master to client feature - on Windows 10, i7 8700, 16GB RAM, AMD R9 390 card - but I can't get both sessions running together perfectly.
Here's what's happening at the moment:
When I load two independent DOSBox sessions running Airline Sim 2 independently on each, the FPS are fluid - over 100 FPS on each.
But when the two DOSBox sessions are connected together via the Built in DOSBox IPX feature, adding lines like this to the bottom of the master and client DOSBox configs:
mount c c:\4\AS2
jemmex load frame=e000
mount c C:\5\AS2
IPXNET CONNECT 127.0.0.1
jemmex load frame=e000
JemmEx is the fix to get this sim running in DOSBox 0.73.
The "master" DOSBox window runs perfectly fluid, 200+ FPS, but there is a noticeable lag in FPS on the client side: only 15 FPS, that sort of thing.
All the client is doing in the sim is slewing from receiving update position heading attitude data from the master, and the master controls how the sim flies.
i.e. you fly it from the master side and the client side follows the master.
When I emailed Simon Hradecky recently about there being a lag in FPS on the client side when using 2 DOSBox sessions connected together using IPX, he said the following :
I used the IPX driver always under pure DOS, never under Windows, so I have no experience with the Windows implementation. It is well possible, that the Windows implementation of IPX is causing issues here.
The master never computes more than 18 updates per second, regardless how fast the frame rate becomes. All updates are calculated with the PC timer tick (DOS timer), that runs at 18.2 Hertz.
It is possible however, that the slave on Windows has an issue on receiving the broadcasts, gets temporarily stuck and therefore loses the visible frame rate. This could have even worsened with the later Windows versions, as the IPX support has basically ceased decades ago in favor of IP (and due to lack of a reasonable IP implementation on DOS I never made an IP driver), and I recall tremendous problems even with Novell's Netware under Windows (in particular as soon as Windows became 64 bit, then none of the clients worked anymore and data corruption was normal).
Is there an easy fix for this in Windows 10? i.e. turn something off in Windows 10 Networking, that sort of thing?
or could the DOSBox IPX feature be made faster in later releases of DOSBox?
PS. I recently bought on GOG the Falcon3 collection. It was on sale for only 3.75, and I've been using that included version of DOSBox in the Falcon3 GOG installer to run some of my other DOS games and have found it runs games smoother and faster than the DOSBox releases in the Download tab on this site.
Was this a custom version of DOSBox specially tuned to make the GOG release of Falcon3 run smoothly?
|
OPCFW_CODE
|
I think you only need two kinds of people to create a technology hub: rich people and nerds. They're the limiting reagents in the reaction that produces startups, because they're the only ones present when startups get started. Everyone else will move.
Observation bears this out: within the US, towns have become startup hubs if and only if they have both rich people and nerds. Few startups happen in Miami, for example, because although it's full of rich people, it has few nerds. It's not the kind of place nerds like.
Whereas Pittsburgh has the opposite problem: plenty of nerds, but no rich people. The top US Computer Science departments are said to be MIT, Stanford, Berkeley, and Carnegie-Mellon. MIT yielded Route 128. Stanford and Berkeley yielded Silicon Valley. But Carnegie-Mellon? The record skips at that point. Lower down the list, the University of Washington yielded a high-tech community in Seattle, and the University of Texas at Austin yielded one in Austin. But what happened in Pittsburgh? And in Ithaca, home of Cornell, which is also high on the list?
I grew up in Pittsburgh and went to college at Cornell, so I can answer for both. The weather is terrible, particularly in winter, and there's no interesting old city to make up for it, as there is in Boston. Rich people don't want to live in Pittsburgh or Ithaca. So while there are plenty of hackers who could start startups, there's no one to invest in them.
There's more, and Paul is both right and wrong in a lot of big ways. Some of his errors have to do with how he models Silicon Valley. Doesn't anyone remember the role of the defense industry out there? Lockheed, Ford, and Westinghouse created and then unleashed a huge pool of engineering talent in the South Bay. The Shockley/Fairchild story that he repeats is only part of the nerd narrative. And eccentricity requires wealth -- he's right about that -- but it's not just investing wealth that matters; it's wealth across the spectrum. Rich people matter not because they can put money into new ventures; rich people matter because someone has to pay the taxes that support the towns where the nerds want to play. Read John Markoff's What the Dormouse Said and Jeff Goodell's Sunnyvale to get a sense of the suburban wealth that subsidized the high-tech revolution. What happened in Northern California is that a wealthy culture was willing to tolerate and subsidize eccentricity at low levels. Over time, that eccentricity built on itself, to the point that it overthrew both the cultural and economic establishment.
The lesson for Pittsburgh, then, isn't quite the one that Paul Graham draws. He's pretty skeptical about Pittsburgh's prospects, because he doesn't see the money. But Pittsburgh has plenty of wealth. It doesn't have investing wealth, necessarily (it has capital, but not enough risk capital). It does have enough economic wealth spread generally across the region that Pittsburgh is a comfortable place to live. There are a lot of nice suburbs here, and a lot of pleasant and even wealthy neighborhoods in the City of Pittsburgh. What that wealth doesn't do particularly well, though, is tolerate the cultural and economic eccentricity that urban and university communities inevitably produce. To turn a rhetorical phrase, on the whole, Pittsburgh's taxpaying and grant-giving wealthy aren't happy about subsidizing subversion. If you want to be skeptical about Pittsburgh, be skeptical that that culture will ever change. If you want to be optimistic, even if you don't want Pittsburgh to be another Silicon Valley, be optimistic that Pittsburgh will turn out to be willing to subsidize eccentricity, and willing to give that eccentricity some space to breathe.
Graham has promised Part Two shortly. Stay tuned.
|
OPCFW_CODE
|
A prominent player in the philosophy of computer science is the international organization that includes the two disciplines in its title. A history on the website of the organization (IACAP) traces its development. Past conference programs, held by organizations later merged into IACAP, exhibit leadership in the nascent philosophical computing topics of the First Millennium: logic, software principles, the computational model, and knowledge representation. Later talks deal with methods such as connectionism, neural networks, and natural language processing, reflecting the history of artificial intelligence and philosophy of mind. Social and ethical considerations appear as well, in discussions of norms of communication and agency, open source, and open access.
The regional conferences were merged into a single annual event that alternates between North America and Europe. Recent papers show a broader scope of inquiry: the nature of information, digital media and teaching methods, business processes, network structures and behavior, issues of autonomy and control in hardware and software, and the obligations imposed by society's expectations of digital services.
What drew current philosophers of computer science into these subjects, and what do they think that the study can offer to computer science itself? Let’s hear from a leader in the field. Don Berkich, an associate professor of Philosophy at Texas A&M Corpus Christi, is the Executive Director of the International Association for Computing and Philosophy.
A View from Don Berkich
Should computer scientists and philosophers bother with one another?
After all, philosophers are frequently dismissed outside the discipline in the States as meddlesome, ill-informed, and technically unsophisticated dilettantes whose harping on about this or that arcane, readily-resolvable issue is safely ignored. Recall in this light Feynman’s famous injunction, "Philosophy of science is about as useful to scientists as ornithology is to birds."
Conversely, computer scientists are viewed askance outside their own narrow discipline as puffed-up, obsessive technicians whose work may be useful (or dangerous) in the way most merely applied mathematics can be useful (or dangerous), but is hardly of any fundamental interest. Indeed, the few debates that do filter out are amusingly trivial: Indent style? Emacs vs. vi?
To be sure, these are caricatures. Though, are they altogether unfair?
I don’t think so–yet all the more reason for conversation between the philosophers and computer scientists, or so my experience suggests. Let me explain.
- As an avid open-source advocate, amateur coder, CLI fan, and ‘that weird guy in the department who uses something other than Windows or Mac’ (Debian, since Bo–version 1.3–was released in ’97), I’ve learned firsthand that philosophical speculation absent the hard work of developing computational models lacks credence. Thus, I argue, computational plausibility is an important test for philosophical speculation (in the philosophy of mind, in particular); many philosophers are likewise sympathetic to this kind of computational, as opposed to logical or empirical, positivism.
- As a philosopher, I’ve long been interested in computational models of agency. I’m grateful to Rod Grupen’s willingness to welcome me to meetings of the Laboratory of Perceptual Robotics at UMass-Amherst and to sit as an outside member on my dissertation committee. What I discovered then has never since been gainsaid: investigations in computer science into perception, say, or agency, are no less philosophical for being grounded in the demands of implementation–and all the more rigorous and fascinating for being thus grounded.
- As Executive Director of the International Association for Computing and Philosophy, I’ve seen firsthand over the years how discussions between philosophers, computer scientists, and cognitive scientists challenge, fascinate, and illuminate in ways richly edifying to all involved, discussions which have given rise to such interdisciplinary journals as Minds and Machines and Philosophy and Technology.
- As a member of the philosophy faculty at Texas A&M University-Corpus Christi, I’ve had the opportunity to develop a signature senior-level course on the foundations of artificial intelligence and cognitive science, Minds and Machines, which explores a semester-long dialogue between AI-optimists, backed by their impressive accomplishments, and AI-pessimists, backed by their daunting skeptical challenges.
- Finally, I am currently and for the first time offering a new course, ‘Introduction to Philosophy for Computer Science Majors.’ In light of my computer science colleagues’ complaints that their students struggle with logic and problem-solving, the course will emphasize puzzles, paradoxes, and formal logic to help the students develop those skills while perhaps interesting them in a side of philosophy they likely don’t know.
For my part, in future posts I plan to present philosophical challenges and pose foundational questions to the computer science community in hopes of promoting fruitful conversation.
Those Philosophical Challenges and Foundational Questions, and More
Certainly, as Berkich says, considerations of implementation strengthen rather than weaken philosophical work in artificial intelligence and other research based on human cognition. In my own case, the intriguing question was not exactly "How can philosophy use the computer?" but rather "What can philosophy tell us about what goes on in computer science?" (Hill 2016). Of the new products of the digital age, we can ask, if they are objects, what is their ontology; we can ask, if they invoke creative processes, what are their aesthetics; we can ask, if they yield knowledge, what is their epistemological status? Selected issues from that realm will be covered in future postings in this space. Firm grounding in real computer science, as Berkich states, can only enhance the integrity of such inquiries.
Hill, Robin K., 2016, A Call for More Philosophy in the Philosophy of Computer Science, APA Newsletter on Philosophy and Computers, Spring 2016 (15:2)
International Association for Computing and Philosophy, http://iacap.org
Robin K. Hill is adjunct professor in the Department of Philosophy, and in the Wyoming Institute for Humanities Research, of the University of Wyoming. She has been a member of ACM since 1978.
|
OPCFW_CODE
|
Page 1 of 1
quadrium | flame only uses one processing core?
Posted: Sun May 20, 2007 7:12 pm
Is it true that quadrium | flame only uses one processing core?
I have a brand-new 8-core Intel Mac Pro.
Render jobs only run on one core at a time and switch from core to core, not making use of the full rendering potential of the machine. Is this due to the software implementation? If so, are you planning to alter the code to use multiple processors in the future?
Posted: Mon May 21, 2007 9:20 am
This is true.
In quadrium2, every pixel is completely independent, and can be generated as needed, so we can separate the image up into different areas and let multiple threads generate each area and then stitch them all together.
In flame, however, the only way to know the value of any given pixel is to generate the entire image (i.e. apply all the iterations) and look at the final result. So if we were to break the image up, we'd basically just generate the entire image twice, which would be no performance improvement. This is because the whole algorithm is iterative (the "I" in "IFS"), and there is no way to "skip ahead".
Basically, the renderer generates a point (figures out its location and color) and then merges that with a running total (a histogram of each pixel, showing how often it is "hit"), and then repeats (using the location of the last point to do the next one), until it has generated as many points as you've asked for. It then converts those histograms to RGB values for display.
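The loop described above can be sketched as follows. This is an illustration of the general chaos-game/IFS algorithm, not quadrium's actual source; the function names and the [-1, 1] coordinate mapping are assumptions of mine. Note how each point depends on the previous one, which is exactly why the chain cannot be split mid-stream:

```python
import random

def render_flame(transforms, n_points, width, height, seed=0):
    """Chaos-game sketch: each point depends on the previous one, so this
    loop is inherently sequential -- there is no way to 'skip ahead'."""
    rng = random.Random(seed)
    histogram = [[0] * width for _ in range(height)]
    x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
    for _ in range(n_points):
        f = rng.choice(transforms)       # pick one function of the IFS
        x, y = f(x, y)                   # next point depends on this one
        px = int((x + 1) / 2 * (width - 1))
        py = int((y + 1) / 2 * (height - 1))
        if 0 <= px < width and 0 <= py < height:
            histogram[py][px] += 1       # count how often each pixel is hit
    return histogram
```

A real renderer would then tone-map the hit counts (typically with a logarithmic density mapping) to produce the final RGB image.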
I've tried doing things like separating the "generate the points" step from the "merge with histograms" step, but it turns out that the overhead of communicating safely between the two makes the whole thing actually run slower (and use two cores to do so, instead of just one). I've also looked into generating and merging multiple sets of iterations in parallel, but again, the overhead of synchronizing them destroyed any advantage. The fundamental problem is that this is just not an algorithm designed to be parallelizable. (I've got a few other options to play with, but they will require significantly more memory, and I don't expect miracles out of them.)
Future versions will, however, be able to render movies using multiple cores (since each frame can be rendered in parallel and independently), but that won't help rendering in general.
Re: quadrium | flame only uses one processing core?
Posted: Tue Aug 11, 2009 9:04 pm
I use Apophysis running on Windows 7RC which is running in a VM (Parallels). My machine is an 8-core Mac Pro. The VM is set to 4-procs. Apophysis is set to use 4 threads.
When rendering in Apophysis, 4 cores are maxed => very short render times. q | f of course uses only one. So it appears that multi-processor use is certainly possible.
Apophysis and q | f are two different programs but they are essentially doing the same thing. Apophysis is open source. I wonder if a look at the source would give some ideas on implementation of multithreading in q | f?
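For what it's worth, flame renderers that do use multiple cores (as the Apophysis behaviour described above suggests) typically run several independent chaos-game chains, each with a private histogram, and sum the histograms at the end: hit counts are additive, so no locking is needed while the chains run. A sketch under those assumptions (my own names, not taken from the Apophysis or q | f source):

```python
import random

def run_chain(transforms, n_points, width, height, seed):
    """One independent chaos-game chain with a private histogram."""
    rng = random.Random(seed)
    hist = [[0] * width for _ in range(height)]
    x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
    for _ in range(n_points):
        x, y = rng.choice(transforms)(x, y)
        px = int((x + 1) / 2 * (width - 1))
        py = int((y + 1) / 2 * (height - 1))
        if 0 <= px < width and 0 <= py < height:
            hist[py][px] += 1
    return hist

def render_parallel(transforms, n_points, width, height, n_workers=4):
    """Split the point budget across independent chains and merge by
    summing; each run_chain call could execute on its own core."""
    per_chain = n_points // n_workers
    chains = [run_chain(transforms, per_chain, width, height, seed)
              for seed in range(n_workers)]
    merged = [[sum(c[r][col] for c in chains) for col in range(width)]
              for r in range(height)]
    return merged
```

The trade-off is exactly the extra memory the developer mentioned: one full histogram per worker, merged only once at the end, which avoids the per-iteration synchronization that killed his earlier attempts.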
With Snow Leopard just around the corner it would seem that there will be opportunities to bring the quadrium family of programs into the 64-bit multi-processing world.
BTW I'm so tempted to get q | flame for my Touch... OK I just bought it. The store has the wrong program description displayed... something about the locations of heavenly bodies...?????
|
OPCFW_CODE
|
- Assistant Professor, University of Notre Dame
- Postdoctoral Fellow, Yale University and Howard Hughes Medical Institute
- Ph.D. in Biochemistry, The Ohio State University
- B.S. in Chemistry and Biological Sciences, Wright State University
- NIH Pathway to Independence Award (K99/R00)
- American Cancer Society Postdoctoral Fellowship
- OSU Presidential Fellowship
- American Heart Association Predoctoral Fellowship
Structural, Biochemical & Cellular Roles of RNA Triple Helices
RNA structure is largely viewed as being single-stranded or double-stranded, although triple-stranded RNA structures were deduced to form in test tubes almost 60 years ago. Despite this early discovery, only four examples of RNA triple helices have been validated in eukaryotic cellular RNAs. The long-term goal of the Brown laboratory is to understand the structural, biochemical, and cellular roles of RNA triple helices using the MALAT1 triple helix as a model. This triple helix forms at the 3' end of the long noncoding RNA MALAT1 (metastasis-associated lung adenocarcinoma transcript 1), when a U-rich internal loop of a stem-loop structure binds and sequesters a downstream 3'-terminal A-rich tract. This unique triple-helical structure, composed of nine U•A-U triples separated by a C+•G-C triple and a C-G doublet, protects MALAT1 from an uncharacterized rapid nuclear RNA decay pathway.
The fundamental structural and biochemical properties of RNA triple helices remain to be rigorously characterized. The Brown laboratory is interested in several key questions. Do proteins bind specifically to the MALAT1 triple helix? Is there an undiscovered class of triple-stranded RNA binding proteins? How does the cell degrade a highly stable triple-helical RNA structure? What is the relative stability of canonical (U•A-U and C•G-C) versus non-canonical base triples? Can successive non-canonical base triples form a stable triple helix? What are the structural parameters of an ideal RNA triple helix? What is the folding pathway of an RNA triple helix? What other RNA triple helices exist in mammalian cells? To investigate these questions, we are currently using a variety of approaches, including X-ray crystallography, cell-based assays, molecular biology, classical biochemistry and high-throughput methods.
Studying the MALAT1 triple helix will advance our understanding of cancer. MALAT1 is upregulated in multiple types of cancer and promotes tumor growth by affecting proliferation, invasion, and metastasis. Importantly, the region of MALAT1 that is sufficient to induce oncogenic activities includes the triple helix. Our work shows that the MALAT1 triple helix is required for MALAT1 accumulation; therefore, we are currently exploring whether the triple helix plays a direct role in mediating oncogenic activities beyond its function as an RNA stability element.
- Kunkler, C. N., Hulewicz, J. P., Hickman, S. C., Wang, M. C., McCown, P. J., Brown, J. A. "Stability of an RNA•DNA-DNA triple helix depends on base triplet composition and length of the RNA third strand" 2019 Nucleic Acids Research, 47 (14), pp. 7213-7222. DOI:10.1093/nar/gkz573.
- Ruszkowska, A., Ruszkowski, M., Dauter, Z., Brown, J.A. "Structural insights into the RNA methyltransferase domain of METTL16" 2018 Scientific Reports, 8 (1), 5311. DOI: 10.1038/s41598-018-23608-8
- Brown, J.A., Kinzig, C.G., Degregorio, S.J., Steitz, J.A. "Methyltransferase-like protein 16 binds the 3'-terminal triple helix of MALAT1 long noncoding RNA" 2016 Proceedings of the National Academy of Sciences of the United States of America, 113 (49), pp. 14013-14018. DOI: 10.1073/pnas.1614759113
- Brown, J.A., Kinzig, C.G., Degregorio, S.J., Steitz, J.A. "Hoogsteen-position pyrimidines promote the stability and function of the MALAT1 RNA triple helix" 2016 RNA, 22 (5), pp. 743-749. DOI: 10.1261/rna.055707.115
- Brown, J.A., Steitz, J.A. "Intronless β-globin reporter: A tool for studying nuclear RNA stability elements" 2016 Methods in Molecular Biology, 1428, pp. 77-92. DOI: 10.1007/978-1-4939-3625-0_5
- Brown, J.A., Bulkley, D., Wang, J., Valenstein, M.L., Yario, T.A., Steitz, T.A., Steitz, J.A. "Structural insights into the stabilization of MALAT1 noncoding RNA by a bipartite triple helix" 2014 Nature Structural and Molecular Biology, 21 (7), pp. 633-640. DOI: 10.1038/nsmb.2844
|
OPCFW_CODE
|
‘My dashboards’ is a powerful feature we’ve introduced to EMI for registered users (you can register by clicking 'register' at the top of the screen or here).
The ‘my dashboards’ feature allows you to construct your own dashboards that are compiled from report views that you save. Developing your own customised dashboards on EMI allows you to quickly gain updated insights from the reports you’re especially interested in. The feature can also be used as a favourites list for frequently visited reports. These options make EMI easier and more efficient for you to use.
To add a report to a dashboard, select the button found at the bottom of any report. This will bring up a dialog box of settings.
To get started quickly, you can simply accept the defaults, enter a dashboard name, and hit add. Many report views can be added to a single dashboard.
There are four key features to consider when adding a report:
- How would you like to handle date parameters?
- How do you want to handle other parameters?
- Which dashboard do you want to add the report to?
- What do you want to call each report view on your dashboard?
1. How would you like to handle date parameters?
Three alternative situations for handling date parameters are explained below. These depend on the report and report parameter settings you have selected.
A. For reports with a single date parameter there are two options:
- Use the default report date. This ensures that the report view in your dashboard will update as the dates change on the underlying report. Reports generally default to the latest date.
- Save the selected report date to permanently capture a view of a point in time.
Example: A report showing monthly snapshot data
Try it yourself here.
B. For reports with time range parameters there are three options:
- Use the default report dates. This ensures that the report view in your dashboard will update as the date range changes on the underlying report.
- Save the selected quick range to always show this range (e.g. latest seven days) whenever the dashboard is viewed. This option is only available if the report being added is currently set to a quick range through the date selection parameter. This option is great when you want a dashboard to automatically stay up-to-date and the default date range does not work for you.
- Save the selected report date range to capture a view of a point in time. This option allows you to save a report illustrating a market event or to save a specific view in your dashboard.
Tip: using dashboards to capture a view of a point in time works great if you have a temporary dashboard set up while you are looking into a topic, exploring a market event or developing a story.
Example: A report showing the latest seven days, i.e. the ‘latest seven days’ was selected when viewing the report before adding the report to the dashboard.
Try it yourself here.
C. For time series reports with a time comparison added there are two options:
The time comparisons feature is an advanced feature that enables you to make time comparisons of a single time series for set lengths of time, e.g. compare this week to last week. These options are only relevant if you have added a time comparison to a report.
- Save the selected quick range and relative time comparisons. An example might be adding a report view that always shows the latest seven days, compared to the two preceding weeks.
- Save the selected report date range and time comparisons to capture a view of a point in time including the comparisons.
Example: A report showing a time comparison of the last seven days with the preceding two weeks
Try it for yourself here.
2. How do you want to handle the other parameters?
Handling the other report parameters is simpler than the date parameters. You have three options:
- Use default report parameters
- Save the parameters excluding the series filter selection (if available). This option allows you to save the parameters you have changed while letting the data and default chart settings drive the series filters.
Tip: combining this selection with a quick range selection means that your dashboard report won’t filter out the entry of new participants (if participants are the series in the report).
- Save the parameters including the series filter selection. This option will only be available when you have changed the series filter parameter.
Example: A report where the series filter has been changed
3. Which dashboard do you want to add the report to? (or create a new dashboard)
You need to select a dashboard to add the report view to or you can add a name for a new dashboard. You are able to create up to five dashboards. The number of reports (or report views) in a dashboard is not currently limited although only eight reports are shown on each page.
4. What do you want the report view to be called in your dashboard?
Report names on EMI are rather generic to accommodate the range of views and insights that can be seen by changing report parameters. For example, showing a trend line for all of New Zealand or just one specific region.
You can add a specific name for the report and parameter selections you've saved to make it more meaningful to you in your dashboard. This naming feature is especially useful when you have added multiple views from the same report to the same dashboard (for example, perhaps illustrating activity in different regions).
|
OPCFW_CODE
|
Defer JS delaying javascript on Safari Desktop and Mobile
Hi there,
I got reports from iOS users that the time until the page's JavaScript is loaded increased with the use of defer JS.
So I looked at the loading order and found this:
Chrome, Firefox, Opera, Vivaldi:
document
stylesheet
logo
jquery
main js
images & further js & google font
Safari:
document
stylesheet
svg images
logo
images
google font
jquery
main js
further images and js
This is delaying the main JavaScript - which is required for many features of the site - by multiple seconds in Safari (depending on the internet connection).
Is this a known issue? (I couldn't find anything similar)
Any help would be great. Live example: //hayday-forum.de Main js: WCF.Combined.min.js pagespeed_version: apache stable latest (installed 2 days ago from official website, unfortunately it doesn't tell the version in the filename)
defer_javascript is probably not a good choice for sites where the main content is loaded from JS. It makes sense when the content is rendered as HTML, and additional functionality (e.g. menu actions, ads, analytics) is loaded via JS.
The main surprise for me is that this isn't a problem for you on the other browsers besides Safari.
What was your motivation for using defer_javascript on this site?
The motivation is bad third-party JavaScript which needs to run synchronously in the middle of the page. With defer it is moved to the end of the page load without issues, while trying to invoke it with document.write myself caused the script to overwrite the entire page.
The WCF.Combined.min.js is not a hard requirement; it mainly handles navigational aspects. But if the page loads visually 4 seconds before the JS, it causes issues for the user.
I'm really wondering why the script is loaded right after the CSS and its dependencies on most browsers (that's basically the perfect loading order), but on Safari it fails.
I don't see how this is a bug in defer_javascript. However, if you are looking for a way to selectively delay the loading of third party JS, you should probably consider the 'async' attribute on the script tag. See https://developer.mozilla.org/en-US/docs/Web/HTML/Element/script for details.
Thank you. Unfortunately "async" and "defer" cause issues with the script. I resolved it now by excluding the specific script manually.
|
GITHUB_ARCHIVE
|
A few months ago I wrote a short blog post about nodejs, the technology webinos is based on, and its module system, NPM. I questioned why, considering how lucrative it might be for a malware writer, there was no malicious software listed in the NPM repository.
After receiving some great comments – thanks to all who got in touch with me – I decided to write a follow-up article expanding further on this question.
Many people pointed out (as I did in the original article) that nodejs and the NPM repository were just one example of the problem. The same could be argued of Ruby Gems, PHP, or any package management and web framework system. Indeed, one could ask the same question of the Debian package management system, which has been going for much longer and has many more packages available. This is all absolutely true. It seems that malware is extraordinarily rare on most package management systems. Even the Google Play Market for Android – much maligned for containing malware – actually has a good track record* considering the number of applications available. Compared to downloading files from arbitrary websites and shareware, app stores appear to be significantly more reliable. Why is that? What makes a package management system such a good security mechanism, despite the apparent lack of oversight or significant security controls?
App stores and package repositories do offer some accountability, as well as revocation and, potentially, mediation. Developers have to ‘sign up’ in one way or another to an app store or package repository, and can then be held accountable when their submission is found to be harmful. This is only of limited value in most repositories, as new user accounts can easily be created, but it makes a reputation for good software something worth protecting. Furthermore, if the repository charges money to create an account or upload a new module, then this money is lost when the terms of service are violated.

Revocation is probably more important: when an application or module is identified to be malicious, it can be removed before doing further damage to other people. This provides no protection for early adopters, but the more cautious certainly benefit. Of course, the benefit is lessened if automatic updates are allowed, as malware can be disguised as legitimate software until it reaches maximum market penetration, and then can be modified to exploit the end user.

Mediation is often cited as an advantage of a package repository or app store, and the reason why Apple have avoided significant security incidents. Unfortunately, unless you have Apple-like resources, this is unlikely to be cost effective. One of the few exceptions to this rule is third-party mobile app stores. There is something of a malware issue on Android app stores outside the US and western Europe (essentially anywhere that Google Play doesn’t dominate).
Where does nodejs fit in? When I wrote the last article, there was no evidence of any malware, but also no evidence of any accountability, mediation or revocation mechanisms. The NPM repository offers no way to report bad modules and accountability appears minimal. Nodejs certainly isn’t special, but should malware writers take the time to focus on it, it could be especially bad.
Perhaps it is more interesting to ask whether it is worth writing malware for nodejs? As I mentioned in my first post, the potential targets are certainly rich enough. Almost all server-side web apps will have some privileged access to data or services. While good system design can mitigate some of the threats — I received several comments suggesting that TLS connections should not be terminated by the nodejs app, but should use a reverse proxy such as Nginx instead, to protect private keys — it can’t mitigate them all. Furthermore, the reality is likely to be far from the best practice: I suspect many people do run nodejs applications with super-user privileges, and without trying to protect against these kind of attacks.
If malware is a possibility (and is still potentially lucrative) then why does it appear to be so scarce? Having thought about this problem for a while, I can only assume that it is because such malware does not offer a high reward/effort ratio. There is too much low-hanging fruit elsewhere (phishing end users, for example) to make this particular avenue of attack worthwhile. For one thing, it is probably too hard to automate using server-side malware after it has been deployed. There are thousands of ways in which each endpoint might be configured, and no obvious single set of malicious actions that might be performed. As a targeted attack it would be effective (an attacker could spend time exploiting the specific machine it was installed on) but this would rely upon the target making use of this module, which is relatively unlikely. This kind of attack can only work if planned a long way in advance, and if it can be effectively deployed and run on multiple targets at the same time. Indeed, for this kind of malware to be successful, the author would have to develop a popular nodejs module in the first place, which is a significant amount of work. If the exploit is then enabled by an update, it would then be reliant on developers frequently updating their modules. On balance, therefore, it seems that nodejs malware simply isn’t worth the effort.
A more worrying possibility was exposed by Adam Baldwin recently. He discovered that a CSRF flaw on npmjs potentially allowed anyone to update any package. This dramatically shifts the reward/effort ratio. If an attacker simply has to repackage a legitimate module with a few additional malicious components, and then wait for developers to run updates, the impact could be enormous. For example, the ‘underscore’ module has been downloaded nearly 10,000 times in the last day, and has over 1000 other modules depending on it. There will be hundreds of developers who may not even realize that they are using this module. If it was updated to include, say, virus.js with a more malicious payload, thousands of production boxes might be at risk.
However, the main threat from nodejs packages is currently badly written software rather than malicious software. Of the 32 thousand nodejs modules on NPM, it’s not crazy to suggest that several hundred will have exploitable vulnerabilities. Simple attacks, such as malicious content injection, will be present in many modules. Native modules may suffer from all the classic exploits found in any other piece of software. To combat this, the Node Security project aims to audit and inspect every nodejs module and provide “advisories, issues and pull requests so modules get fixed”. This is a laudable goal, although the sheer amount of effort required is daunting. Inspecting this number of modules seems impractical and it is inevitable that only a subset of security issues will be identified. There is a good opportunity for program analysis: if modules can be assessed in an automated way to identify common flaws then the general level of security can be raised without too high a cost. Indeed, the idea of automatic exploit generation is something that The University of Oxford has an interest in, and was recently discussed at the Crest Open Workshop on Malware (PDF presentation by Daniel Kroening). There are many questions as to how such automated vulnerability analysis should be performed in a responsible manner, but the technology exists to make a big difference to a very large number of systems.
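Automated checks of this kind can start very simply. As an illustrative sketch (the regex approach and the risk list here are my own assumptions, not how the Node Security project actually works), one crude first pass is to flag modules whose source pulls in high-privilege built-ins:

```javascript
// Illustrative audit sketch: flag require() calls to high-risk built-in
// modules in a source string. A real audit would parse the code properly
// rather than pattern-match, since this misses dynamic requires entirely.
const RISKY = ["child_process", "fs", "net", "vm"];

function riskyRequires(source) {
  const found = new Set();
  const re = /require\(\s*['"]([^'"]+)['"]\s*\)/g;
  let m;
  while ((m = re.exec(source)) !== null) {
    if (RISKY.includes(m[1])) {
      found.add(m[1]);
    }
  }
  return [...found];
}

module.exports = { riskyRequires };
```

Even this crude pass could help triage which of the tens of thousands of modules deserve a closer manual review.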
One potential system-level mitigation for vulnerable nodejs modules is the use of least-privilege permissions like those found in mobile applications. If developers could intentionally limit themselves to only certain built-in nodejs modules (e.g., just “URL” and “HTTP” modules) then this would greatly reduce the impact of a vulnerability being exploited. Of course, it would not help those modules that have “file” or “process” permissions, but it would aid the auditing effort as only privileged modules would need extensive review. I expect that the same security controls we see employed to protect user-focused applications will slowly begin to find their way into developer-focused tools.
Finally, I’ll end this somewhat rambling blog post by proposing that this is a very promising area of research. With NPM we have a huge, open source repository of source code that has not, in general, been subject to much security analysis. This is ripe for studies on how good developers are at implementing secure software, how effective particular mitigations might be, and identification of the most common mistakes.
- Caveat: we don’t know how much mobile malware exists, but evidence suggests it does not affect many US users of Android or iOS.
|
OPCFW_CODE
|
Do I need to calibrate my mirror lens?
First off, I'm sorry for the slide-show, but I've had this lens a long time now and I'm fed up with it. A while back I tried adjusting this ring to either extreme and couldn't see a difference.
No matter what I do I cannot get SHARP pictures with this lens. The view-finder will confirm I'm in focus. Even if I use my camera's live-view I cannot get very sharp pictures with it.
As I said, I've had this lens a while, and in that time I've learned to just lift the camera to my face and shoot; I've also gained quite a steady hand. I can usually go as low as 1/400 at 1500mm and 1/160 at 500mm before my movement becomes a problem. To eliminate that factor, though, the 1500mm pictures were shot at 1/4000th and the 500mm at 1/1000 (the day wasn't so bright).
First picture: here's what I think to be some sort of calibration ring. When I first tried it (according to a guide I found online) I tried either extreme, the middle, you name it, but couldn't really see a difference. However it does move back and forth, so it probably calibrates something!
The ring I am referring to here moves if the screw just to the left of "FEET" is loosened, not sure about the collar behind that.
500mm pictures
1/1000th shutter
This is probably the best one; in the view-finder I could see the circles that indicate good focus, and the rangefinder dot was on. However, if you zoom in, there's not much detail there!
1/4000th shutter
With both of these, I took a series of pictures moving the focus ring ever so slightly, these are the best ones.
1500mm pictures
All 1/4000th
As you can see, they're VERY fuzzy, especially the bottom one (for some reason) – I simply cannot get better than this!
I'd really like to get nice pictures of the moon, but I really cannot get past this. I am sorry for the slide-show, I want to convey "look, it isn't just out of focus!"
Note: I had to scale these pictures quite a lot, if you want a link to a full-size one, just tell me.
Yeah, you should link the pictures to full crops if you want to talk about sharpness. Also tell us what camera you're using.
@feetwet I'll do that now.
Also maybe tell us which lens?
At those focal lengths you really should be using a tripod if you want to get the best results. Are you?
Also, at those magnifications, atmospheric turbulence becomes an issue. Longer focal length and longer shutter speeds can dramatically increase that effect.
No, you don't need to calibrate it, you need to calibrate your expectations: I could be wrong, but (judging from the cap and the use of green for the metres) that looks a lot like the old Vivitar 500mm F8 from the stone age (Pentax mount?), maybe one of the cheapest and worst mirror lenses ever sold, unable to produce a sharp image no matter what you do. So yes, your 500mm shots are more or less what one could expect from a mirror lens; the 1500mm shots, on the other hand, are a really interesting mystery, as your 500mm is... well, a 500mm: how can you shoot at 1500mm with a 500mm lens? Are you using a 3x teleconverter? If so... that is all the quality you can expect.
In both cases, if you just want a confirmation of its "qualities" and want to check the focus ring (you'll need a lot of space for this), you can download a front/back focus test image (I usually use this one: http://2.bp.blogspot.com/-1G5zs4s9ctQ/UwMLB2ZhJdI/AAAAAAAAAbY/f9fAF6zDQVk/s1600/Test+Focus+2.jpg) and do a bit of testing. It's easy, cheap and instructive.
Do use a sturdy support or tripod.
Focus on a point-source (a star, say) so there's no mistake about the subject interpretation.
Alternatively, you could use a laser on an optical bench to collimate your lens. Any collimated source is what you're after.
Manually, set infinity and squeeze off a shot.
Manually, find the best focus, and compare the focus-ring position difference if any.
Adjust the "ring" and compare.
Repeat until there's no difference.
I believe you will be adjusting the back-focus.
|
STACK_EXCHANGE
|
Some texts are not translated in the html build
When writing a book in another language (in my case Spanish), and with the correct language set in the sphinx section of _config.yml, some text still appears in English.
Such is the case for the search box:
and the table of contents:
The messages generated by sphinx, for instance after a search, are ok:
To Reproduce
Steps to reproduce the behavior:
Create a book
Set the language to Spanish
Build the book
See error
Expected behavior
All English messages are translated.
Environment
Python 3.8.3
Jupyter Book: 0.7.4
MyST-NB: 0.8.5
Sphinx Book Theme: 0.0.35
MyST-Parser: 0.9.1
Jupyter-Cache: 0.3.0
Operating System: Ubuntu 20.04
Additional context
Sphinx configuration section in _config.yml
sphinx:
  config:
    latex_elements:
      fncychap: \usepackage[Glenn]{fncychap}
      babel: \usepackage[spanish]{babel}
    language: "es"
I had the same issue. While you can change the string "Search this book..." by overriding "search_bar_text" in configuration files, the string "Contents" seems to be hardcoded in topbar.html. Replacing this literal string by a variable should be simple to do. The same comment applies to some strings in footer.html (e.g., "Last updated on"). Thanks!
Yes we should definitely make it possible to customize the contents text. Would be happy to review any PRs that give this a shot. For reference, here's where contents is placed:
https://github.com/executablebooks/sphinx-book-theme/blob/master/sphinx_book_theme/topbar.html#L30
and here is where the search bar text is configured in the theme:
https://github.com/executablebooks/sphinx-book-theme/blob/master/sphinx_book_theme/sidebar.html#L14
Also opened up https://github.com/executablebooks/sphinx-book-theme/issues/197 to discuss i18n in this theme more generally 👍
In 3b002b4cd8f56d806c6c4f0e8816819a22cbb3d4, "Contents" is now auto-translated.
The search box default can be set in your conf.py:
html_theme_options = {
    "search_bar_text": "My default text"
}
There's no translation specifically available for the default text ("Search the docs ..."), so it would be a bit trickier to do that one automatically.
I'll close this then, and any further discussion can be had in #197.
actually the search text is now translated + all buttons and tooltips 😄
👏
Miguel Montes
On Wed, Sep 23, 2020 at 2:34 AM Chris Sewell<EMAIL_ADDRESS>wrote:
actually the search text is now translated + all buttons and tooltips 😄
|
GITHUB_ARCHIVE
|
This page was created to provide information about Embedded Search.
While Enterprise Search usually refers to the standalone product, the term Embedded Search refers to the use of Enterprise Search technology within one or several Business Suite applications. SRM, therefore, uses Embedded Search built on Enterprise Search technology. The following figure describes the logic of this technology.
Embedded Search reads the data from the TREX server; it is considered a faster option than a direct database read when dealing with large amounts of document data.
How does Embedded Search work?
A set of models is defined in the Enterprise Search Modeling Wizard to simulate the structure of the SRM documents.
Available models are:
- Central Contract Header
- Central Contract Item
- Confirmation Header
- Confirmation Item
- Product Category
- Purchase Order Header
- Purchase Order Item
- Shopping Cart Header
- Shopping Cart Item
There is no support for Bidding or Invoicing in Embedded Search.
The original specifications called for various methods of updating/loading data to Enterprise Search. The initial load was carried out via report ESH_ADM_INDEX_ALL_SC (where SC stands for search, not shopping cart!), and the idea was for delta loads to be taken care of by Changepointer processing, Fast Push, and Fast Delta Update.
Only Changepointer processing was adopted, due to performance concerns with the other two delta loading options. Changepointer processing allowed for the implementation of a scheduled job, expected to run approximately every 2 minutes. The report is ESH_IX_PROCESS_CHANGE_POINTERS, and it can also be executed directly from the ESH Cockpit as of NW 7.02 (so not available in any of our SRM systems yet). The report checks whether any documents are ready to be loaded to TREX and processes them accordingly.
Data extraction is handled by a BAdI, BADI_ESH_IF_OBJECT_DATA (interface IF_BADI_ESH_IF_OBJECT_DATA).
There are two main methods:
- IF_BADI_ESH_IF_OBJECT_DATA~NEXT - Package wise reading for initial load
- IF_BADI_ESH_IF_OBJECT_DATA~GET_DATA - Delta Load for Delta Processing
The search itself is transparent to users: it is not apparent which agent the search is transmitted to, and the results are the same either way. Database search is the default; if the ES flag is selected in BBP_BACKEND_DEST then Embedded Search is used:
It is possible to switch on (or off) Embedded Search for single users via a parameter in SU01.
Param ID: /SAPSRM/ES_ACTIVE
- Value = X (explicitly on)
- Value = O (explicitly off)
Method DETERMINE_ACTIVE_SERVICE of class /SAPSRM/CL_SRC_SERVICE_FACTORY reads the user parameter. Useful reports and tools:
- ESH_ADM_INDEX_ALL_SC – reindex all data (be careful, as all data in ES is deleted and ES is unusable for a while; only use in agreement with the customer)
- ESH_IX_PROCESS_CHANGE_POINTERS – change pointer processing; plan as a job
- ESH_OBJECTS_INDEX_TEST – test indexing in a dummy process (no F4 help; insert the SW component (SRM_SERVER) and model name manually)
- ESH_TEST_SEARCH – test search in ES manually (helpful to check whether search is working, for complete BOs or single indexes/tables)
- TREX_ADMIN – admin tool by the TREX team for indexes etc. (you have to insert the RFC connection)
- Z_CC_TEST_SEARCH_AGENT_CLIENT – search for object types:
SAP Help - Embedded Search
|
OPCFW_CODE
|
Does semi-sensitive information demand encryption / security?
Since it's been requested that this question be rewritten:
Is SSL (or some other encryption equivalent) required in the below case? From a small local business website, is there any damage that an attacker can do with semi-sensitive customer information (e.g. the list of items below)?
Original:
I've started working on redesigning a client's website. They are a small business and their website has a very basic system setup. No one pays online, but items are reserved by filling out a form. As of now, this information is submitted without any security / encryption. The information includes:
name
phone number
address
email
That is about as sensitive as it gets, i.e., there is no credit card information, passwords, or anything like that.
Does this kind of information warrant some kind of security measure (e.g. SSL)? How sensitive does information need to be before security is required? Is there an industry standard?
As a user of web sites, I would want those personal pieces of my information kept secure from people attacking a business to build lists of that sort of thing. As a small business owner, I would want my database of (potential) customers and their contact information kept secret, not only to protect those people, but also to dissuade my competition from attempting to steal it from me. As a normal person on the web, I wouldn't think twice about whether a site uses encryption or sends data in the clear, until I read about some site getting in trouble; then I'd avoid that site.
@mah I agree with that. To protect their contact information, is there any measure other than SSL? I know getting a certificate can be somewhat expensive, and from a small business owner's perspective the ROI seems low.
There are always other ways to communicate (a Java applet using a custom communication protocol, for example), but nothing standard to "use instead of SSL". Keep in mind also that SSL does not only encrypt the communication; it also helps the client be certain they're talking to the site they think they are, and not a man in the middle.
1) There are free SSL certificates, and even normal paid certificates are relatively cheap. Get the cheapest certificate recognized by all relevant browsers; no need to get fancy. 2) The biggest issue with SSL is that older browsers need a distinct IP address per domain, which clashes with cheap virtual servers. 3) Alternative encryption methods, for example using JavaScript, are significantly weaker since they can't protect against active attackers. They're also significantly more development effort, and thus most likely more expensive in practice.
Reasons to use HTTPS:
Encryption: As @mah's comment says, HTTPS traffic is difficult to read and / or change by a man-in-the-middle.
Verification of Identity: An SSL certificate is proof to the user that the content being served is actually coming from the server it purports to be.
In other words, HTTPS not only encrypts the information but also tries to guarantee that the server the user is talking to is truly the server they think it is. If your client's site is plain HTTP, a competitor or other party could fake the web site entirely, so that the user is really talking to some other website; it is easy to imagine ways to abuse the customers of a company you don't like with a web site you control. A man-in-the-middle is not even needed then. To me, this is a good reason to use HTTPS in your case.
If the attacker tried this against an HTTPS site, the browser should raise an alarm. These days, those alarms are difficult for the web user to miss.
Name, address, phone, and email are all Personally Identifiable Information (PII). Although it's not likely to be a regulated business, you should protect your client by encrypting this information in transit by SSL and also protecting it on the system.
Ask yourself how you'd want your information to be transmitted if you were a customer. Most would think encrypting those details would be a good thing, and may perhaps even assume you are doing it as a matter of common sense. Think of it as the golden rule of data - treat others' data how you would want your own to be treated.
Personal information such as names, email addresses, phone numbers, and mailing addresses is not private. This is information that is meant to be shared with others. SSL does not really protect information that is already publicly available in more accessible formats, such as the phone book.
(However, you do need a good privacy policy when storing and using people's personal information, to assure your users why you need their personal information, and what you intend to use it for. This is mostly because some organizations have a history of selling their databases of personal information against the wishes of their clients. SSL does not help with this, however.)
DO YOU NEED SSL?
Having a strong privacy policy does no good if you're choosing to transmit the data in the clear. Your policy, when followed, might say you won't intentionally release the information; however, if you don't encrypt the communication, a person sniffing the traffic might have a different view. As to "personal information is not private": perhaps yours isn't, but I keep my information private, giving it out only when I choose to. I may have no legal recourse should those I give it to release it further, but that's a different matter.
The information may not technically be "private" (as it can be found in a phone book, etc.) but most of it, especially in combination with the other pieces, is considered PII and may be subject to regulation by law (current or near-future). Even if not, breach of the data (and the subsequent lawsuits - rest assured there will be some) could result in your company being the case that sets the legal precedent for new privacy laws.
|
STACK_EXCHANGE
|
"Error creating socket" usually means the port is already in use by another process, sometimes another instance of the same application left behind by an unclean shutdown. Check which process holds the port before anything else (PID should be a column in your process viewer; add it if you don't see it).

From the java.net.ServerSocket documentation:
- If there is a security manager, its checkListen method is called with the port argument to ensure the operation is allowed; refusal results in a SecurityException.
- bind(SocketAddress endpoint, int backlog) binds the socket to an IP address and port number. A port number of 0 means a port is automatically allocated; it can then be retrieved by calling getLocalPort, which will continue to return the local port after the socket is closed.
- backlog is the requested maximum length of the queue of incoming connection indications; an implementation may use a different value or ignore the parameter altogether.
- setSocketFactory(SocketImplFactory fac) sets the server socket implementation factory for the application; the factory's createSocketImpl method is then called to create the actual socket implementation.
- setPerformancePreferences expresses the relative importance of short connection time, low latency, and high bandwidth; an application valuing connection time above all could invoke it with the values (1, 0, 0).
- The receive buffer size of an accepted socket is determined by calling Socket.getReceiveBufferSize().
- With SO_TIMEOUT set to a non-zero timeout, a call to accept() will throw a SocketException when the timeout expires.
- isBound() returns the binding state of the ServerSocket; a socket is also bound if its channel was created via the ServerSocketChannel.open method.

From the accompanying troubleshooting thread: the network itself worked internally (the poster could ping any workstation and run everything except software needing TCP, which failed with "unable to create socket"). Suggested causes included another process already bound to the port, leftover DNS and registry settings, and adware/spyware that had hosed Wsock32 on the 9X machines, producing the same symptoms.
|
OPCFW_CODE
|
import csv
import os
from functools import partial
from multiprocessing import Pool
from statistics import mean, median, stdev

import click
import numpy as np

from diff_evolution.algo import (
    ConstantDE,
    ConstantSuccessRuleDE,
    DifferentialEvolution,
    RandomSuccessRuleDE,
    init_population_uniform,
)
from diff_evolution.algo_control import RECORDING_POINTS, AlgorithmControl
from diff_evolution.cec17_functions import cec17_test_func

MAX_FES_FACTOR = 10000
TARGET_VALUE_FACTOR = 100
BOUNDS_1D = [(-100, 100)]
PROCESSES_NUM = 4


def run_single_problem(problem, de: DifferentialEvolution, max_fes=None):
    dims, func_num = problem
    if max_fes is None:
        max_fes = MAX_FES_FACTOR * dims
    bounds = BOUNDS_1D * dims
    target_value = TARGET_VALUE_FACTOR * func_num

    def call_cec(x):
        fitness = cec17_test_func(x, dims=dims, func_num=func_num)
        return fitness[0]

    algo_control = AlgorithmControl(call_cec, max_fes, dims, target_value)
    de.run(algo_control, bounds, init_population_uniform)
    algo_control.fill_up_recorder_values()
    return algo_control.recorded_values, algo_control.error()


def run_multi_problems(algorithm, problems):
    run_process = partial(run_single_problem, de=algorithm)
    with Pool(PROCESSES_NUM) as p:
        results = p.map(run_process, problems)
    return results


def generate_output(algorithm, algo_results, dims, func_num, output_path):
    res_table = np.zeros((len(RECORDING_POINTS), len(algo_results)))
    for i, algo_result in enumerate(algo_results):
        res_table[:, i] = algo_result[0]
    errors = [el[1] for el in algo_results]
    metrics = {
        "func.": func_num,
        "best": f"{min(errors):.2E}",
        "worst": f"{max(errors):.2E}",
        "median": f"{median(errors):.2E}",
        "mean": f"{mean(errors):.2E}",
        "std": f"{stdev(errors):.2E}",
    }
    np.savetxt(
        os.path.join(
            output_path, f"{algorithm.__class__.__name__}_{func_num}_{dims}.txt"
        ),
        res_table,
        delimiter=",",
    )
    save_metrics_to_csv(
        os.path.join(output_path, f"{algorithm.__class__.__name__}_metrics_{dims}.csv"),
        metrics,
    )


def save_metrics_to_csv(file_path, metrics: dict):
    fieldnames = ["func.", "best", "worst", "median", "mean", "std"]
    write_header = not os.path.isfile(file_path)
    with open(file_path, mode="a", newline="\n") as csv_file:
        writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
        if write_header:
            writer.writeheader()
        writer.writerow(metrics)


def measure_performance(algorithm, output_path, dimensions=(10, 30, 50), functions=range(1, 31)):
    for dims in dimensions:
        for func_num in functions:
            if func_num == 2:  # function 2 is skipped in this benchmark
                continue
            print(f"Running test for function {func_num}, {dims} dims.")
            res = run_multi_problems(algorithm, [(dims, func_num)] * 51)
            generate_output(algorithm, res, dims, func_num, output_path)


@click.command()
@click.option(
    "--output-dir",
    "-o",
    required=True,
    help="Output directory",
    type=click.Path(exists=True, file_okay=False, dir_okay=True),
)
@click.option(
    "--algo", "-a", required=True, help="Algorithm version name (class name)", type=str
)
@click.option(
    "--dims", "-d", help="Dimensions to be tested", type=int
)
def run_measurements(output_dir, algo, dims):
    algorithms = {
        ConstantDE.__name__: ConstantDE,
        ConstantSuccessRuleDE.__name__: ConstantSuccessRuleDE,
        RandomSuccessRuleDE.__name__: RandomSuccessRuleDE,
    }
    de = algorithms[algo]()
    if dims is not None:
        measure_performance(de, output_dir, [dims])
    else:
        measure_performance(de, output_dir)


if __name__ == "__main__":
    run_measurements()
|
STACK_EDU
|
Moodle 2.1.1+ (Build: 20110811)
After much mulling of pros and cons, we have adjusted capabilities so that a teacher is allowed to delete his or her course(s).
However, when logged into a course as the teacher, a tool for deleting the course is nowhere to be found.
Where is it hidden?
Thanks for any tips on finding it.
PS: We already know how to delete courses using system-wide admin capabilities, so that's not the issue here.
Hi Bill. Which permissions did you alter in order to be able to do this? (or to think you could do this, I should say) I haven't set this up but I would assume you would have to give the teacher at the very least category rights in order to go to the category page to be able to delete their course - a bit like a course creator.
Hi Bill and Mary,
Is this now possible in 2.1? In 1.9 I'd understood it had to be all or nothing: either a user can delete all courses (not just their own), or no courses. Is it now possible for a user (teacher, manager... whatever) to delete just their own courses? Or just courses in a certain category?
Very useful but also a little dangerous. Do all teachers realise that if they haven't made a backup, the course is gone for ever?
Per my original post, the decision was made after considerable thought. Not taken lightly.
But these questions remain:
(1) In 2.1, can a teacher be given the capability to delete his/her course(s) (but not anyone else's courses)?
(2) If so, what is the correct configuration to achieve that capability?
(3) When properly configured, where is the trigger located so the teacher can pull it?
You can give a teacher the delete course capability, but there is no 'delete me' link or button inside the course. Even if you are logged in as an admin, you won't find any sort of 'delete course' functionality within a course. If this is something you really want to do, you'll have to add it yourself.
As a quick test on the demo site, I gave teachers the moodle:course/delete capability and I stuck a "Delete this course" link in an html block - the link points to http://demo.moodle.net/course/delete.php?id=n (where n is the course id) - and it does allow the teacher to delete that particular course.
Thanks for your response.
So you're saying that I properly configured the capability, but there is no corresponding mechanism in 2.1 for utilizing the capability?
If so, that's kind of a tease, isn't it?
Any chance a link or button (I kind of like the mushroom cloud, myself) to utilize that capability will be added to Moodle's built-in functionality?
Re the "delete course" capability: those who have the capability to delete courses usually also have the capability to access categories (like a manager) and delete from the category page. You have never been able to delete a course from within a course, though Ann's workaround is obviously a good one.
Following Ms. Mary to say that you have to realize that you can't nuke yourself from within the course (we here call them "classrooms"). To commit hari kari, you must be one level above the classroom (the "context"), which is why you see mention of "category." Basically, you must be the avenging angel who can exist one level above the course to be nuked, just like Tibbets above Hiroshima.
Thanks. (And very evocative!)
How about this.
If for one of his/her courses (aka classrooms) a teacher wants to get rid of all user data, activities, blocks, etc -- essentially wipe the slate clean and start all over again -- what is the best procedure to follow?
Actually, Bill, you probably don't want to do that. You probably want to preserve the data as part of a student's educational record. Our policy is to keep a classroom accessible to students for 18 months. I hate to suggest checking with your legal division, but I would.
Now, you do have some options.
Somewhere, there is a reset button for a classroom. I've never used it, but I get the sense that it's kind of like the nuke option and leaves a hollow shell.
You can backup the course with no user data, which is what we do. This gives you all the resources and activities and so forth. The only downside is that teacher created forum discussion starters are stripped away.
You can make a template course (classroom), which is also what we do, enrolling teachers but no students. With a template, you can backup with user data (if you wish), preserving those forum discussion starters.
Oh, and looking at your "clean slate" request. You can create a new course from scratch! [Ben slaps forehead.]
Hi Mary, there is a problem with this idea.
A course creator should be able to delete a course that he/she created. I let instructors create courses in their own category, such that
a math prof. can only create courses in the math category; I manually assign them as course creators for that category.
But at the end of the semester this math prof. should be able to delete his/her own courses, and shouldn't be able to delete other instructors' courses.
Site admin -> Users -> Permissions -> Define Roles -> Edit teacher role
Under the "Course" heading, the "Delete courses moodle/course:delete" "Allow" checkbox is checked (despite the "Users could destroy large amounts of information...." warning). When viewing, rather than editing, it says "Allow."
It was my expectation that enabling this capability would allow a teacher to nuke his/her course(s). I further expected that this capability would probably be represented by a button (perhaps a mushroom cloud?) somewhere within the course itself.
Can you tell me where I've gone astray?
|
OPCFW_CODE
|
Introducing <Meta /> Component to help users manage meta info
Background
#10
What changed
Introducing the <Meta /> component.
Adding generation of fresh-seo.config.ts which includes meta config.
Usage
1. run init.ts script
This generates fresh-seo.config.ts from user's routes/
//fresh-seo.config.ts
import { Config } from "fresh-seo";
export default {
routes: {
"/": {
title: "index",
description: "index page",
},
"/about": {
title: "about",
description: "about page",
},
},
} as Config;
2. Add <Meta /> and related imports to page route components
//index.tsx
import { Handlers, PageProps } from "$fresh/server.ts";
import { Head } from "$fresh/runtime.ts";
import seoConfig from "../fresh-seo.config.ts";
import { MetaProps, getMetaProps } from "fresh-seo";
import Meta from "fresh-seo/components";
interface Props {
metaProps: MetaProps;
}
export const handler: Handlers = {
GET(req, ctx) {
const metaProps = getMetaProps(seoConfig, req);
return ctx.render({ metaProps });
},
};
export default function Home({ data }: PageProps<Props>) {
return (
<>
<Head>
<Meta metaProps={data.metaProps} />
</Head>
<div>
{/* body */}
</div>
</>
);
}
sample project with demo
Maybe this puts too much effort on the user :thinking:
What do you think?
This really brings some hassle to the user. Seems easy enough, but we gotta do something about the boilerplate code.
@notangelmario Yeah, adding 10+ lines of code to every route would be a pain!
How can we avoid this?
I can't figure out how to inject stuff into the user's DOM without forcing the user to add it explicitly during the SSR
here are a few points I think we can improve on to simplify the API a little bit (this is not exhaustive, and we will require some work to find the right API here).
For the route, it's actually passed down to the page automatically via PageProps, so we could use this instead of relying on the request in the handler.
In your example, getMetaProps is called in the handler, but it could theoretically be called in the page directly, removing the need for the handler.
The config file could be loaded automatically since we know its path, though I would argue against a config file and use another API for that.
If we manage to do all of this, we could boil the example down to:
import { PageProps } from "$fresh/server.ts";
import { Head } from "$fresh/runtime.ts";
import Meta from "fresh-seo/components";
export default function Home({ route }: PageProps) {
return (
<>
<Head>
<Meta route={route} />
</Head>
<div>
{/* body */}
</div>
</>
);
}
Even with all those optimizations, I think we still need to:
implement the <Meta /> component in every route
pass down the current route to the component
With all of this to consider, I'm not sure if this would be useful as is. I don't think it would actually be easier if we can't spare users these steps.
Maybe providing a nice <Meta /> component is already a good enough feature without the centralized config file?
From what I see, there are only two ways to make this better, and both rely on making a PR to Fresh to implement an additional required API.
Either Fresh provides a useRoute hook (à la Next) to allow our <Meta /> component to access the current route without passing it down, so we can just implement it in _app.tsx, or we have to wait until Fresh improves on the plugin system as initially stated.
@xstevenyung
Relying on the plugin system would fit better with Fresh's concise API.
So I'd prefer to wait for the plugin system to gain hooks into the render process.
Closing per the above.
|
GITHUB_ARCHIVE
|
By Faeran - 24.01.2020
Nicehash login problem
When you experience issues with login or any other similar web issues, our support staff might ask you for a screenshot of the console window. Login Problems. I'm locked out of my account. My password just stopped working, and when I reset it, the new password doesn't work either. I emailed support.
Generally, each file will have two types of copies. No, it's an oversight on their part, since CryptoNight is the best coin to mine with CPUs at the moment.
The whole process of getting a wallet set up, downloading your miner, configuring things in Windows and setting up your batch file to run should take less than 10 minutes.
This sounds great and I was about to hit the buy button on 8x cards, but I read that the DAG file for Ethereum is going to reach over 3GB next year.
And keep a separate wallet with a portion of your overall funds—perhaps a mobile one—for those daily transactions.
I am not sure if this is a driver issue, or something at the NiceHash level.
Casey Tucker: L0L no one cares about nerd money. Intensity is set through the config file, elaborated in the next section.
With the auto exchange being so slow, it's pretty tough to gauge when it will hit that auto.
That is true too, but it doesn't make what I said untrue. How can I select which GPUs each miner will use?
About NiceHash Mining BSV in mempool
Submissions that are mostly about some other cryptocurrency or alternative mining pools belong elsewhere. It's nice to read this.
No crashes for me so far with the newest release, cpuminer-opt 3. For this reason, only adjust the "template" files, since the modified files are overwritten at each launch. No referral links in submissions. The template files all have instructions on how the values can be tweaked.
Monero mining without blockchain amd miner zcash
Of course, there's always the possibility of user error. I made an exception through Windows Defender but it still won't start.
This is a problem related to NiceHash servers. I have tried to switch to other "Service Locations", but it still has the same issue.
The issue is NiceHash, not our CPUs. Automatic payments in bitcoins, daily or weekly, with a minimum payout.
Mining app not working: I've pointed it at the manually downloaded miner, but now it's saying it can't start the process. How can I fix it?
I reckon just try running Claymore and see what it says. Thanks for your time. QuintLeo: It's a waste of time and money to enlarge the pool resources so weaker miners can actually mine CryptoNight.
I want to show that you don't have to be a computer geek to get into it.
I guess you are probably right, but I don't think NiceHash staff is too worried about this, since there has been no update regarding this issue for nearly 2 months now. How can I fix it?
Anybody noticed the minimum has gone up? You could get hacked. Now disable that one, and enable one of the remaining dual miners without a benchmark and go back to playing with the dcri value.
Generally, each file will have two types of copies.
Yes, but not ready to fire and forget. For adjusting CPU intensity, modify the cpu config. I had to read through it about 10 times before I understood how to change the settings to make it work.
Currently Chinese bitcoin exchanges have suspended bitcoin withdrawals for the last thirty days and are expected to lift the suspension as soon as the PBOC negotiations complete.
Is there any way I can directly import and connect to Trezor?
Reinstalling the program may fix this problem. With the auto exchange to Bitcoin Cash being so slow, it's pretty tough to gauge when it will hit that auto.
Attached please find the screenshot and log file. There is a description of the process here, and this is the Windows process. These will be generated on the first time running Xmr-Stak.
Lol, I found the same article yesterday and now I'm running Awesome Miner with NiceHash.
Has anyone tried MultiPoolMiner? OK, so you use Mining Pool Hub? Can an ethereum private blockchain network have millions of nodes?
New cards, low hash rates for mining?
Price graphs for numerous coins. This error is happening to me as well, sometimes in all locations.
And why would they?
I am new to NiceHash Miner. I have installed v1.
Whisper Implementation in Bitcoin Is there some kind of implementation in bitcoin to communicate between nodes, which is similar to the Whisper Communication in Ethereum?
Eth credited 0 means you have 0 credited to the internal wallet where you can cash out ETH.
Payout when I can. I'm going to have to start over. Do you need to do something special to set it up? Thanks, Felix. NHML writes a warning comment in the file header for the modified versions, so you can tell if you see one. Let's try NiceHash OS!
I know you gotta set up an account with them; you don't manage the GPUs from your PC with SimpleMining, it's done from a site you access on a normal Windows PC.
Save it to a separate notepad text file and save it on your Desktop for easy access. Under Awesome Miner just make a profile for each GPU.
It randomly crashes on me without any error messages.
NiceHash mining pool gone? xmr-stak-cpu dual CPU mining.
But, as more and more people became involved in the practice, the difficulty went up. The template files all have instructions on how the values can be tweaked.
Are you the OP from reddit? Say you bought bitcoin a few years ago: you could be a millionaire right now, and you might want to spend some of that money.
It does this quite often throughout the day, so your balance will fluctuate.
|
OPCFW_CODE
|
From the KnowledgeBase
Linux: Tips for secure and safe installation and operation
Before attaching a Linux computer to the campus network, it's very important to ensure that it is secure. If the proper precautions are not taken, it is very possible for a new Linux machine to get hacked within minutes of connecting to the network. The following are a set of tips for safely operating your computer. If you are an inexperienced user, OIT strongly encourages you to take the time to read about and understand the security issues involved with the operating system before plugging into the network.
If you have purchased a factory install of Linux, it may be advisable to remove it and start fresh. You never know what may have already been installed, and you will give yourself greater control and understanding of the system by installing it from scratch. There are many different distributions of Linux available. OIT will not recommend any of these distributions over another, but RedHat is a commonly used client and this document will refer to RedHat specifics. For an outline of the available distributions, please visit:
Keep your computer unplugged from the network while installing. Most distributions have similar install options. Please watch for the following install options:
1. What kind of security do you want on your computer?
We strongly encourage you to choose high security.
2. Do you want the network turned on when you boot your machine?
As a beginner, we recommend that you choose no. Once you've studied and understood what is involved with networking, you will be able to enable networking on boot-up.
Software / Patches
Make sure to keep your install of Linux at the latest revision level. It is possible to get automatic updates and patches for your computer. For RedHat specific installs you can sign up for RedHat Network (RHN). To do this, run the command up2date at the command line. This works much like the Software Update feature on the Macintosh and Windows Update on PCs.
Most distributions come with TCP wrappers and IPtables installed. Make sure
you are running this. For details, please
You should disable all nonessential daemons (i.e. NFS, Bind).
Always use ssh and scp instead of telnet and rcp. This will ensure a secure, encrypted connection.
If you have a /etc/inetd.conf file, edit it to comment out such things as telnet, talk and finger.
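As a concrete (hypothetical) sketch of the inetd.conf edit above, the commands below operate on a small sample file created on the spot, so they are safe to run anywhere; on a real system you would work on a copy of /etc/inetd.conf and review a diff before installing it. GNU sed is assumed for `-i`.

```shell
# Create a tiny sample inetd.conf so the commands are runnable as-is.
cat > /tmp/inetd.conf.sample <<'EOF'
ftp    stream tcp nowait root   /usr/sbin/tcpd in.ftpd
telnet stream tcp nowait root   /usr/sbin/tcpd in.telnetd
finger stream tcp nowait nobody /usr/sbin/tcpd in.fingerd
EOF
# Prefix the unwanted services with '#' so inetd ignores them.
sed -i -e 's/^telnet/#telnet/' -e 's/^talk/#talk/' -e 's/^finger/#finger/' /tmp/inetd.conf.sample
# Show which entries are now disabled.
grep '^#' /tmp/inetd.conf.sample
```

After editing the real file, remember to send inetd a HUP signal (or restart it) so the change takes effect.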
These are a few pieces of software that can help you ensure the security of your Linux computer:
TripWire keeps a database of your system. If you suspect something has changed,
you can use this database to check your suspicions. Please
Sudo logs all commands executed as root user (or superuser) and allows you
to control user access to root commands. Please
Bastille-Linux is an easy to use Linux firewall. Please
Resources / News
For updated information and news on the latest Linux security issues, you may find the following web sites useful.
OIT's Unix Systems' Security page http://www.princeton.edu/~essweb/linux/linuxsecurity.html
For information on general Linux security, please
You may also find some useful newsletters at this web site. There are Linux
newsgroups that mail security tips and
|
OPCFW_CODE
|
What's wrong with an ISP refusing to give up bandwidth for free?
The customers are already paying for the bandwidth; the ISPs want to charge the content creators as well.
Could you imagine if you wanted to send a package, and both you and the recipient had to pay for the shipping?
Now the delivery company is making twice as much money on the same work, because they are getting paid twice to do it.
So you're saying Netflix shouldn't have to pay for internet access? By that logic all the sites I host shouldn't have to pay for bandwidth either.
What's happening though is Netflix is already paying for bandwidth. On top of that, other ISPs are telling them they have to pay for access to their customers' networks.
To expand on your postage example, that would be like if a town told Amazon they had to pay $$$ to get their packages into city limits.
Netflix pays for their internet connection. We pay for ours. Netflix should not have to pay for ours nor should we have to pay for theirs. You know exactly what /u/PrimeLegionaire meant. ISPs are trying to double dip.
That's why I thought he was talking about Netflix not having to pay for bandwidth. I've always been on the same page when it comes to ISP's double dipping.
But whatever, it's reddit.
Load more comments
Chances of being caught or anyone caring? Pretty much 0.
I have it. Used it once only. Unless there's a game you want to play that uses it, I'd say save your money.
This works just fine though http://www.tekrevue.com/tip/chrome-high-dpi-mode/
I'd rather have a stable version of chrome.
Probably just keeping track of daily vitamins?
I just had to do this as well but opted to just start over. Copying thousands of tiny meta data files was just taking way too long. Really wish it was all in one big db.
Lots of tiny files shouldn't be any slower than one big file of the same size. It isn't really an effort issue either. Just copy the folder it all resides in... From a UX standpoint copying one folder is the exact same as copying one file.
Edit: Here is the folder path for each platform. Is it really that difficult to copy one folder instead of one file?
It was definitely a performance issue having all those files. I could move a multiple TB file with no problem at all. Moving 13 gigs of files (the size of my plex directory, not counting the cache) would take over 4 hours. I can easily copy the same 13 gigs zipped in a fraction of that time, although generating the zip and extracting take up a ton of time.
Bitcasa has this same problem. They cache files into thousands of meta data files which end up slowing everything down then it has to overwrite the files when the cache is full or when you manually empty it.
Maybe it's an issue with NTFS, I dunno. All I know is it's very slow.
Any bets as to whether he was trying to reach and grab onto scenery...
I see so many people that just act like idiots at parks. Most of these same people wouldn't mess around near heavy equipment at a construction site. Yet they feel they are safe to blatantly disregard any safety instructions given in a situation designed for amusement.
Hell, I was absolutely amazed at one point when there were videos posted of people hanging on the bridge over the rapids at Thorpe Park in the UK to move between rafts. I wish I didn't have to chalk all these things up to rider error, but it happens so often that I'm just jaded to riders misbehaving.
Every time I've been on splash mountain and pirates there's been people reaching into the water to splash their friends.
This is what ends up causing the parks to install barriers to prevent people from reaching out.
How the hell did that side-to-side keyboard ever make it into the final product? Who created that and decided it was more productive than a QWERTY keyboard? Who approved it? That keyboard should be burned with hellfire. I'm surprised it even made it to the Xbox One; we've had this keyboard for quite some time on some 360 apps.
Sad part is it's not just the Xbox One version. It's on a lot of TVs that way too. Absolutely stupid. I expect better from Google.
|
OPCFW_CODE
|
How do I build a regression model with integer constraints on parameters?
My question is similar to: How do I fit a constrained regression in R so that coefficients total = 1? except that I am interested in a solution to the following constraints on the parameters:
All $\pi_i$ should be -1, 0 or 1.
Basically: How do I fit a regression when the only allowed weights are -1, 0 or 1?
Canned software packages and modules don't always offer such options. On the other hand, SEMs and/or nonlinear models typically offer the most flexibility in this regard. Of course, both assume that an explicit model of some type has been formulated. In other words, these are not exploratory, variable selection procedures.
How many $\pi_i$ 's are there? Are the $\pi_i$'s the only parameters to be estimated, or are there also continuous parameters? This should be solvable by use of (mixed) integer quadratic programming, for which there are many off the shelf solvers, both free and commercial. The actual computational difficulty for the solver to find the optimal solution depends on the size and difficulty of the problem. Another option would be to solve this as a (mixed) integer Second Order Cone Problem (SOCP), which may or may not be easier (faster) to solve, but requires more knowledge to formulate.
As per @hxd1011's answer, if the number of integer parameters to be estimated is sufficiently small, brute-force evaluation of the sum of squared residuals for all possible combinations of parameter values is another option. The methods I mentioned in the preceding comment can be many times faster than brute-force evaluation, because they are able to intelligently prune out possibilities, as the algorithm proceeds, which must be inferior to already evaluated parameter value combinations.
I just came across this ready-to-go MATLAB package which explicitly deals with (mixed) integer least squares problems: http://cs.mcgill.ca/~chang/software/MILES.php .
This seems to be a discrete optimization problem, where you have integer constraints on the decision parameters (some of the decision variables / parameters are discrete). Compared to continuous optimization, discrete optimization is much harder to solve: many continuous optimization problems can be solved in $P$ time, but most real-world discrete optimization problems are $NP$-hard.
If your problem is not in large scale, you can run a brute-force search. For example, if you have $10$ parameters and each parameter has $3$ possible values, the search space is $3^{10}=59049$, which can be done in seconds with modern computer. On the other hand, please note, the search space grows exponentially, if you have $20$ parameters, the brute force is not feasible.
When I say search, I mean: try different configurations of parameters and calculate the loss/objective function (for example, squared loss in regression), then check for the configuration with the lowest loss. Here is an example on the mtcars data with $2$ parameters, each taking values in $\{-1,0,1\}$. In this toy example, the optimal solution is $(0,0)$, with a minimal loss of $14042.31$.
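The brute-force search described above can be sketched in Python. This is a hypothetical stand-in for the mtcars example: the data here is synthetic, generated so that the true weights are $(1, -1, 0)$, and we enumerate all $3^k$ weight vectors in $\{-1,0,1\}^k$ and keep the one with the smallest sum of squared residuals.

```python
import itertools
import numpy as np

# Synthetic data: 50 observations, k = 3 predictors, true weights (1, -1, 0).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -1.0, 0.0]) + rng.normal(scale=0.1, size=50)

# Enumerate every weight vector in {-1, 0, 1}^k and track the best loss.
best_loss, best_w = float("inf"), None
for w in itertools.product([-1, 0, 1], repeat=X.shape[1]):  # 3**k candidates
    loss = float(np.sum((y - X @ np.array(w)) ** 2))
    if loss < best_loss:
        best_loss, best_w = loss, w

print(best_w, best_loss)  # recovers (1, -1, 0) on this synthetic data
```

With k = 10 this loop visits 59049 candidates and still runs in well under a second; the exponential blow-up only bites for larger k, which is where the MIQP solvers mentioned above come in.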
As mentioned earlier, if you have more parameters, then such an approach is not feasible. You may want to do one of two things.
Integer programming
Local search
Integer programming one will give you exact answer with high computation cost, and local search is fast, but may give you sub-optimal answer. Each one is a huge topic you can explore and I think it is hard to explain them in detail here.
Edit: as Mark mentioned in the comments, compared to a general discrete optimization problem, this problem has a special structure: the objective function is quadratic, therefore Mixed Integer Quadratic Programming (MIQP) software can be used. In addition,
Another option would be to solve this as a (mixed) integer Second Order Cone Problem (SOCP), which may or may not be easier (faster) to solve, but requires more knowledge to formulate.
I don't think constraint programming is going to do the trick here.
I also think your "Local Search" link will just confuse matters.
replaced with wikipeida link
I think the better solution is to fit the normal regression without any constraint, then do the search only in the neighborhood of the continuous solution!
Thank you very much hxd1011 and @MarkL.Stone -- there are many $\pi_i$. Nevertheless, a good idea to do brute force for small cases. Can you elaborate a bit more on integer programming? Are there readily available tools which achieve that?
Mattemattica: That's an interesting idea too, does it work?
I don't think the proposal by @Matemattica makes any sense at all.
@DreamFlasher You want to use Mixed Integer Quadratic Programming software, which has the acronym MIQP.
@MarkL.Stone Sorry I didn't read carefully the question. I thought that the constraints are that all the coefficients should be integer!
hxd1011 @MarkL.Stone MIQP looks like what I want, hxd1011 do you want to add that to your answer or do you MarkL.Stone want to add another answer -- and then I'd accept either? :)
@DreamFlasher let me revise my answer and add Mark's answer.
|
STACK_EXCHANGE
|
import argparse
import multiprocessing
import os
import configparser
import librosa
import scipy.signal
import numpy as np
eps = np.spacing(1)
class feature_extractor(object):
def __init__(self, conf_path):
""""
Generate feature from raw audio.
Args:
conf_path: string
the path of configuration
Attributes:
conf_path
n_fft
n_mels
f_max
f_min
LEN
hop_length
win_length
sr
hop_length_second
win_length_second
Interface:
init_extractor_conf: Initialize most of attribute values from feature configuration file.
get_feature: Extract feature from raw audio file.
get_feature_for_single_lst: Get feature of audios in a file list.
get_feature_for_lst: Generating feature for a file of the audio file list utilizing multi-threading.
"""
self.conf_path = conf_path
self.init_extractor_conf()
def init_extractor_conf(self):
""""
Initialize most of attribute values from feature configuration file.
Args:
Return:
"""
conf_path = self.conf_path
        assert os.path.exists(conf_path)
config = configparser.ConfigParser()
        config.read(conf_path)
assert 'feature' in config.sections()
feature_cfg = config['feature']
self.n_fft = int(feature_cfg['n_fft'])
self.n_mels = int(feature_cfg['n_mels'])
f_max = feature_cfg['f_max']
if not f_max == 'max':
f_max = int(f_max)
f_min = int(feature_cfg['f_min'])
assert f_max > f_min
self.f_max = f_max
self.f_min = f_min
self.LEN = int(feature_cfg['LEN'])
hop_length = float(feature_cfg['hop_length'])
win_length = float(feature_cfg['win_length'])
sr = int(feature_cfg['sr'])
self.hop_length = hop_length
self.win_length = win_length
self.sr = sr
self.win_length_second = int(sr * win_length)
self.hop_length_second = int(sr * hop_length)
def get_feature(self, input_file, output_file):
""""
Extract feature from raw audio file.
Args:
input_file: string
the path of raw audio
output_file: string
the path to store feature
Return:
final_feature: numpy.array
the feature of the audio
"""
sr = self.sr
n_fft = self.n_fft
n_mels = self.n_mels
f_min = self.f_min
f_max = self.f_max
win_length_second = self.win_length_second
hop_length_second = self.hop_length_second
LEN = self.LEN
#load raw audio signal from file
y, _ = librosa.load(input_file, sr = sr)
#hanning window
win = scipy.signal.hann(win_length_second, sym = False)
#Mel filter banks
mel_basis = librosa.filters.mel(sr = sr, n_fft = n_fft, n_mels = n_mels,
fmin = f_min, fmax = f_max, htk = False)
#Fast Fourier transform
spectrogram = np.abs(librosa.stft(y + eps,
n_fft = n_fft,
win_length = win_length_second,
hop_length = hop_length_second,
center = True,
window = win))
#mel spectrum
mel_spectrum = np.dot(mel_basis, spectrogram)
#log mel spectrum
log_mel_spectrum = np.log(mel_spectrum + eps)
feature = np.transpose(log_mel_spectrum)
flen = int(sr * 10 / hop_length_second + 1)
#If the duration of the audio is less than 10s, padding
if feature.shape[0] < flen:
new_feature = np.zeros([flen, feature.shape[1]])
new_feature[:feature.shape[0]] = feature
else:
new_feature = feature[:flen]
#if the number of frames of the feature doesn't match the expected number of output frames, calculate the difference.
if not LEN == flen:
squeeze = (flen-LEN) // 2
if squeeze < 0:
#if the number of frames of the feature is smaller, then pad
final_feature = np.zeros([LEN, new_feature.shape[1]])
final_feature[-squeeze:LEN + squeeze] = new_feature
else:
#if the number of frames of the feature is bigger, then intercept
lsq = squeeze
rsq = flen-LEN-squeeze
final_feature = new_feature[lsq:flen-rsq]
#save feature
np.save(output_file, final_feature)
return final_feature
def get_feature_for_single_lst(self, lst, wav_dir, feature_dir, id):
""""
Get feature of audios in a file list.
Args:
lst: list
a file list of audio files
wav_dir: string
the dir where the audio files are stored
feature_dir: string
the dir where the feature files will be stored
id: integer
the process number
Return:
"""
for f in lst:
input_file = os.path.join(wav_dir, f + '.wav')
output_file = os.path.join(feature_dir, f)
            print('start processing %d : %s'%(id, f))
if os.path.exists(output_file + '.npy'):
print('process %d : %s exists'%(id, f))
continue
self.get_feature(input_file, output_file)
print('process %d : %s done'%(id, f))
def get_feature_for_lst(self, lst, wav_dir, feature_dir, processes):
""""
Generating feature for a file of the audio file list utilizing multi-threading.
Args:
lst: list
a file list of audio files
wav_dir: string
the dir where the audio files are stored
feature_dir: string
the dir where the feature files will be stored
processes: integer
the number of processes
Return:
"""
with open(lst) as f:
lsts = f.readlines()
lsts = [f.rstrip() for f in lsts]
#the number of audio files to process per process
f_per_processes = (len(lsts) + processes-1) // processes
for i in range(processes):
st = f_per_processes * i
ed = st + f_per_processes
if st >= len(lsts):
break
if ed > len(lsts):
ed = len(lsts)
sub_lsts = lsts[st:ed]
p = multiprocessing.Process(
target = self.get_feature_for_single_lst,
args = (sub_lsts, wav_dir, feature_dir, i + 1))
p.start()
print('process %d start'%(i + 1))
def extract_feature(wav_lst, wav_dir, feature_dir, feature_cfg, processes):
""""
Generate feature.
Args:
wav_lst: string
a file list of audio files
wav_dir: string
the dir where the audio files are stored
feature_dir: string
the dir where the feature files will be stored
feature_cfg: string
the path of configuration
processes: integer
the number of processes
Return:
"""
fextractor = feature_extractor(feature_cfg)
fextractor.get_feature_for_lst(wav_lst, wav_dir, feature_dir, processes)
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='')
    parser.add_argument('-l', '--wav_lst', dest='wav_lst',
                        help='the list of audios')
    parser.add_argument('-w', '--wav_dir', dest='wav_dir',
                        help='the audio dir')
    parser.add_argument('-f', '--feature_dir', dest='feature_dir',
                        help='the output feature dir')
    parser.add_argument('-c', '--feature_cfg', dest='feature_cfg',
                        help='the config of feature extraction')
    parser.add_argument('-p', '--processes', dest='processes',
                        help='the number of processes')
    f_args = parser.parse_args()
    wav_lst = f_args.wav_lst
    wav_dir = f_args.wav_dir
    feature_dir = f_args.feature_dir
    feature_cfg = f_args.feature_cfg
    processes = int(f_args.processes)
    paths = [wav_lst, wav_dir, feature_dir, feature_cfg]
    for path in paths:
        print(path)
        assert os.path.exists(path)
    extract_feature(wav_lst, wav_dir, feature_dir, feature_cfg, processes)
|
STACK_EDU
|
error: Processing of node_modules/react-relay-network-layer/lib/middleware/gqErrors.js failed. SyntaxError: Unexpected token (80:376)
I'm using Brunch and this function makes compilation fail
function noticeAbsentStack() {
return '\n If you using \'express-graphql\', you may get server stack-trace for error.\n Just tune \'formatError\' to return \'stack\' with stack-trace:\n\n import graphqlHTTP from \'express-graphql\';\n\n const graphQLMiddleware = graphqlHTTP({\n schema: myGraphQLSchema,\n formatError: (error) => ({\n message: error.message,\n stack: process.env.NODE_ENV === \'development\' ? error.stack.split(\'\\n\') : null,\n })\n });\n\n app.use(\'/graphql\', graphQLMiddleware);';
}
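Unescaped, that string prints express-graphql advice. Here is a self-contained sketch of just the suggested formatError function (the graphqlHTTP/app wiring from the message is omitted so this runs standalone):

```javascript
// formatError as suggested by the hint string above: include the stack
// trace (split into lines) only when NODE_ENV is 'development'.
const formatError = (error) => ({
  message: error.message,
  stack: process.env.NODE_ENV === 'development'
    ? error.stack.split('\n')
    : null,
});

const sample = formatError(new Error('boom'));
// sample.message is 'boom'; sample.stack is null unless NODE_ENV=development
```

In the message, this function is passed to `graphqlHTTP({ schema, formatError })` and mounted at `/graphql`.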
18:12:20 - error: Processing of node_modules/react-relay-network-layer/lib/middleware/gqErrors.js failed. SyntaxError: Unexpected token (80:376)
at Parser.pp$4.raise (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:2488:13)
at Parser.pp.unexpected (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:623:8)
at Parser.pp.semicolon (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:600:59)
at Parser.pp$1.parseReturnStatement (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:894:55)
at Parser.pp$1.parseStatement (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:744:32)
at Parser.pp$1.parseBlock (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1040:23)
at Parser.pp$3.parseFunctionBody (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:2362:22)
at Parser.pp$1.parseFunction (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1132:8)
at Parser.pp$1.parseFunctionStatement (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:868:15)
at Parser.pp$1.parseStatement (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:739:17)
at Parser.pp$1.parseBlock (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1040:23)
at Parser.pp$3.parseFunctionBody (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:2362:22)
at Parser.pp$1.parseFunction (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1132:8)
at Parser.pp$3.parseExprAtom (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1999:17)
at Parser.pp$3.parseExprSubscripts (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1872:19)
at Parser.pp$3.parseMaybeUnary (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1849:17)
at Parser.pp$3.parseExprOps (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1791:19)
at Parser.pp$3.parseMaybeConditional (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1774:19)
at Parser.pp$3.parseMaybeAssign (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1750:19)
at Parser.pp$3.parseParenAndDistinguishExpression (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:2056:30)
at Parser.pp$3.parseExprAtom (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1978:41)
at Parser.pp$3.parseExprSubscripts (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1872:19)
at Parser.pp$3.parseMaybeUnary (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1849:17)
at Parser.pp$3.parseExprOps (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1791:19)
at Parser.pp$3.parseMaybeConditional (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1774:19)
at Parser.pp$3.parseMaybeAssign (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1750:19)
at Parser.pp$3.parseExpression (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1722:19)
at Parser.pp$1.parseStatement (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:777:45)
at Parser.pp$1.parseBlock (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1040:23)
at Parser.pp$3.parseFunctionBody (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:2362:22)
at Parser.pp$1.parseFunction (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1132:8)
at Parser.pp$3.parseExprAtom (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1999:17)
at Parser.pp$3.parseExprSubscripts (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1872:19)
at Parser.pp$3.parseMaybeUnary (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1849:17)
at Parser.pp$3.parseExprOps (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1791:19)
at Parser.pp$3.parseMaybeConditional (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1774:19)
at Parser.pp$3.parseMaybeAssign (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1750:19)
at Parser.pp$3.parseExprList (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:2418:20)
at Parser.pp$3.parseSubscripts (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1900:29)
at Parser.pp$3.parseExprSubscripts (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1875:21)
at Parser.pp$3.parseMaybeUnary (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1849:17)
at Parser.pp$3.parseExprOps (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1791:19)
at Parser.pp$3.parseMaybeConditional (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1774:19)
at Parser.pp$3.parseMaybeAssign (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1750:19)
at Parser.pp$3.parseExpression (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:1722:19)
at Parser.pp$1.parseStatement (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:777:45)
at Parser.pp$1.parseTopLevel (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:672:23)
at Parser.parse (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:529:15)
at Object.parse (/Users/felixdescoteaux/Projects/expenses/node_modules/acorn/dist/acorn.js:3378:37)
at parse (/Users/felixdescoteaux/Projects/expenses/node_modules/detective/index.js:9:18)
at Function.exports.find (/Users/felixdescoteaux/Projects/expenses/node_modules/detective/index.js:44:15)
at module.exports (/Users/felixdescoteaux/Projects/expenses/node_modules/detective/index.js:23:20)
at /Users/felixdescoteaux/Projects/expenses/node_modules/deppack/lib/explore.js:61:43
at sourceFile (/Users/felixdescoteaux/Projects/expenses/node_modules/deppack/lib/explore.js:104:18)
I don't know why... it seems super odd to me; this string seems legit and it was added a year ago.
When copy-pasting this function into the console, everything works fine.
When I replace the string with an empty string, my app compiles without any problem.
wtf? thanks! 😄
oddly enough, it worked when I moved the folder out of node modules and imported the lib from there 🤔
Really very strange.
I'm using react-relay-network-layer v2.0.1 and my file gqErrors.js ends at line 77, so line 80 does not exist.
It seems the problem is in your bundler, Brunch. I'm using Webpack and it builds the vendor bundle for me without compile errors.
I'm closing this issue for now because I don't know what to do. If you find the root of the problem, please drop a line with the solution for future googlers.
Thanks.
Thanks @nodkz, this is what I suspect as well, opened an issue in the brunch repo 👍
|
GITHUB_ARCHIVE
|
Improve Tokio Metrics
Hey,
thanks for integrating tokio metrics!
In their current form, using the cumulative monitor, I don't think they are of much use though. I have three points I'd like to discuss:
Could we change the default output to a window of one second, or maybe 5 seconds? That way, by polling frequently, we can keep track of performance at specific times. If people want, they can still accumulate the data points in a consumer.
Can we somehow reformat the output into correct JSON?
Can we maybe add a dynamic system that adds a new monitor for every new endpoint it encounters? It seems non-trivial to implement, as I don't know whether we can find out from within the middleware which endpoint a request will be forwarded to, but it might be interesting to learn which endpoints have the biggest performance impact.
I am currently creating a simple UI for this, maybe I'll PR that one in as an endpoint similar to Swagger_UI once I'm happy with it.
I'll take a look at the first two, if anyone has experience in these matters, please feel free to chime in.
Continuation of #206 , @attila-lin
Agree.
Adding a UI for the result would be great!
Following @Christoph-AK's suggestions, I improved the TokioMetrics middleware in the master branch.
@sunli829 Thanks for looking into this!
With the current version I can't seem to find a way to access the data 😅
I guess in the best case we'd have a 'metrics_summary' endpoint that gives a cumulated summary of all running monitors (sum of the 'times and counts' stats, maximum of the 'highest time' stats), and a 'metrics_details' endpoint that lists all monitors separately.
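For what it's worth, a cumulated summary of that shape could be computed client-side from the per-monitor JSON roughly like this (only instrumented_count appears in this thread; max_poll_duration is a hypothetical placeholder for a 'highest time' stat):

```javascript
// Aggregate per-monitor stats into one summary: sum the count-style
// fields, take the maximum of the "highest time"-style fields.
// Note: max_poll_duration is a hypothetical field name for illustration.
function summarize(monitors) {
  const summary = { instrumented_count: 0, max_poll_duration: 0 };
  for (const m of Object.values(monitors)) {
    summary.instrumented_count += m.instrumented_count;        // sum counts
    summary.max_poll_duration = Math.max(
      summary.max_poll_duration, m.max_poll_duration || 0);    // max of maxima
  }
  return summary;
}

const example = {
  '/users': { instrumented_count: 10, max_poll_duration: 3 },
  '/items': { instrumented_count: 5, max_poll_duration: 7 },
};
```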
Working nicely so far!
As I said, I'm unfortunately horrible at everything frontend-related, but I got some basic monitoring running (sum of all endpoint calls):
Funnily enough, the browser makes a call to /favicon that gets 404ed, so every page refresh counts as at least 2 requests.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Tokio Metrics</title>
<!-- <link rel="stylesheet" href="./style.css"/> -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/3.7.1/chart.min.js"
integrity="sha512-QSkVNOCYLtj73J4hbmVoOV6KVZuMluZlioC+trLpewV8qMjsWqlIQvkn1KGX2StWvPMdWGBqim1xlC8krl1EKQ=="
crossorigin="anonymous" referrerpolicy="no-referrer"></script>
<!-- <script src="lib.js" defer></script> -->
</head>
<body style="background: #333; color: #ddd;">
<canvas id="myChart" style="max-height: 300px;"></canvas>
<script>
const ctx = document.getElementById('myChart');
const arrayLength = 100;
const refreshTime = 5000;
const url = "http://localhost:3000/metrics";
Array.prototype.push_rotating = (function (element, limit) {
if (this.length < limit) {
this.push(element)
} else {
this.shift();
this.push(element);
}
})
let data = [];
let i = 0;
for (let j = -arrayLength; j < 0; j++) {
data.push({
time: j * (refreshTime / 1000.0),
ping: null
})
}
let add_fetching = (async function () {
let new_count = await fetch(url).then(response => {
if (!response.ok) {
throw new Error('Network response was not OK');
}
return response.json();
}).then(body => {
console.log(body);
let count = 0;
for (let i in body) {
count += body[i]["instrumented_count"];
}
return count
});
console.log(new_count);
data.push_rotating({
time: i,
ping: new_count
}, arrayLength);
i += refreshTime / 1000.0;
console.log(i);
console.log(data);
console.log(data.map(o => o.time));
console.log(data.map(o => o.ping));
})
const myChart = new Chart(ctx, {
type: 'line',
options: {
scales: {
x: {
ticks: {
display: false
}
}
}
}
});
let update_chart = (function () {
myChart.data = {
labels: data.map(o => o.time),
datasets: [{
label: 'RequestCount',
data: data.map(o => o.ping),
borderColor: "rgb(155, 102, 102)",
backgroundColor: "rgb(155, 102, 102)",
}]
};
myChart.update('none')
});
update_chart();
var t = setInterval(function () {
add_fetching();
update_chart()
}, refreshTime);
add_fetching();
</script>
</body>
</html>
It looks pretty good👏🏻, I'll improve this later and embed the UI in poem.
I thought about the performance impact of the current implementation, though. A new monitor gets instantiated for each unique URI, meaning people/1 and people/2 each get their own monitor, potentially meaning thousands of new monitors per second. Is this okay, especially with the locking going on for each request? Maybe we could make the 'simple' implementation with only one monitor available as an option. Changing this at startup time should be fine initially.
I improved again. 😂 @Christoph-AK
|
GITHUB_ARCHIVE
|
How to add a shadow to a UIImageView that fits the shape of the image content, but with rotation and shift effects
I have been looking for a solution on the web for a long time. Most tutorials are fairly simple and only cover adding a shadow to a UIView. I also noticed that if we add a shadow to a UIImageView, the shadow shape can perfectly fit the shape of the content image if the image itself has an alpha channel. For example, if the image is an animal with a transparent background, the shadow shape is also that of the animal (not a rectangular shadow matching the UIImageView frame).
But that is not enough. What I need to do is apply some changes to the shadow so it has a rotation angle and a compressed (squeezed or shifted) effect, so that it looks like the sunlight comes from a certain spot.
To demonstrate what I need, I have uploaded 2 images below, which I captured from the Google Maps app created by Apple. You can imagine the annotation pin is an image with a pin shape, so the shadow is also "pin shaped", but it is not simply offset by a CGSize: you can see the top of the shadow is shifted right about 35 degrees and the height is slightly squeezed.
When we tap and hold a pin, the shadow is also animated away from the pin, so I believe that such a shadow can be made programmatically.
The best shadow tutorial I have found so far is http://nachbaur.com/blog/fun-shadow-effects-using-custom-calayer-shadowpaths but unfortunately, it cannot produce this effect.
If anyone know the answer or know any better words to search for, please let me know. Thank you.
(Please note that the shape of the image is dynamic in the App, so using any tool like Photoshop to pre-render the shadow is not an option.)
The only way I can think of doing this would be to implement or include a 3d engine into your project that includes options for dynamic lighting. Even if your application is fully 2D the concept you're implying is a 3D problem based on the position of the light, the object causing the shadow, and the surface the shadow is reflected on.
I assume you have tried working with the .shadowPath property?
Yes, I tried shadowPath, but I am not quite familiar with deriving the shadowPath from the original shape of the UIImageView content (a UIImage with a transparent background), which may be the key point of this question. The tutorial on nachbaur.com does mention creating a shadowPath, but it creates an oval shadow path or a shape based on the UIImageView frame (which is a rectangle).
In order to create dynamic effects like this, you have to use Core Graphics. It's incredibly powerful once you know how to use it. Basically you need to set a skew transform on the context, set up a shadow and draw the image. You will probably have to use transparency layers as well.
It doesn't sound like you can use CALayer shadows, since that is meant to solve a specific use-case. The approach Apple takes with the pin marks on the map is to have two separate images that are created ahead of time (e.g. in Photoshop) and they position them within the map relative to a reference point.
If you really do need to do this at run-time, it should still be possible by using either Core Graphics or ImageKit. To get a blurred shadow appearance, you can use the kCICategoryBlur CIFilter. You can then convert the image to grayscale. And to get that compressed look you just need to resize and skew the image.
Once you have two separate images, you can either take the CGImageRef for the shadow image and can set that as the content of another sublayer, or you can add it as a separate view.
If you know what all the shapes are, you could just render a shadow image in Photoshop or something.
Thanks for your comment. I know we can use Photoshop, but my shape is dynamic and could be any shape in the app, so the shadow cannot be pre-rendered.
|
STACK_EXCHANGE
|
JNLP also states that this application can run offline and should be updated as a background process. The technology enables seamless version updating for globally distributed applications and greater control of memory allocation to the Java virtual machine. If you want to use Java applications on your computer, you’ll need to download and install the latest version of Java on your system.
You can always convert a document back to rich text by selecting Format → Make Rich Text when you are not using TextEdit for HTML. Just make sure you have installed the corresponding version of .NET Framework, .NET Core, Windows Azure, Mono or Xamarin. By importing the csv module in Python, you can easily write lists to CSV files using the writerows() method. Florencesoft TextDiff reports the differences, but does not allow changes to be edited or merged. We can compare two text files using the open() function to read the data contained in the files. The open() function will look for a file in the local directory and attempt to read it.
Uninstalling and reinstalling Java could fix it. See also: how to repair the operating system, and how to restore its configuration to an earlier point in time, in Windows Vista. Blog about online marketing, excavators, embedded systems, electronic components, computers, error fixes, and Microsoft Word. When I click the Update Now button in the Java Control Panel, it complains about the system being 'offline.' What does that mean?
How to compare document text using Windows 10
Why would you have anything else when Markdown is so easy to write, format and plays nice with HTML. But one of the biggest gripes of NPP is that it doesn’t support Markdown natively, you’ll have to define the language. In Build139 or later, the 64 bit version is also available from the setup folder, please select and install either the 64 bit or the 32 bit version, depending on your operating system . To do so, click the gear icon in the NppFTP menu bar and select Profile settings.
- If you’re wondering how to make Notepad dark, check out this post that includes numerous dark-themed alternatives to Notepad.
- It will match foobar, but will pretend that only bar matches.
- This instructs PowerShell to start a process with the ‘cmd’ program using the parameters in quotes.
Dark mode makes the appearance elegant, and it reduces eye strain. The new Notepad will adapt to default system theme preference, but now users can switch to dark mode in the Notepad from Notepad settings. For starters, Microsoft has combined the text search tool and the find and replace tool in Notepad to make it a singular window. In the current version of the app, the text search and the find and replace tools appear as different pop-up windows and are bonded with different keyboard shortcuts. This redesign is expected to make the app more user-friendly. While the functional aspects aren’t seeing major changes, there are a few minor yet notable updates that will make the Notepad app in Windows 11 much more modern and easy-to-use.
I also cleaned my paintbrush during the waiting period between coats so it would not harden my brush. The mixture can also be reheated in the microwave again as needed for about 10 seconds if it’s getting too firm during this process. THEN, I found it’s also helpful to have a piece of “waste” paper at the front and back of the pad to catch the drips when you apply the DIY padding compound. I actually used a piece of cardstock for my waste paper on the front, and just a regular piece of paper for a waste sheet on the back. I used 12 sheets of paper cut into four, so that each notepad had 48 pages in it. And although it’s not absolutely necessary, I discovered it’s nice to have a piece of cardstock at the back of the notepad too.
It also marks the differences with color making it easier for you to make the comparison. It is the most useful text comparison tool, especially for programmers, as it supports different repositories to track the versions. In addition, syntax highlighting and better navigation with keyboard shortcuts make it a superb choice.
|
OPCFW_CODE
|
Guo Wen, Liu Qigui, Ding Xinmiao (School of Information and Electronic Engineering, Shandong Technology and Business University)
Objective To address identity switches caused by ambiguous pedestrian features, and the loss of tracking accuracy caused by occlusion between targets in complex scenes, an AIoU-Tracker multi-object tracking algorithm is proposed. Method First, a dedicated AIoU (Adaptive Intersection over Union) regression loss function is designed for the backbone network's detection head; it measures overlap area, center-point distance, and aspect ratio, alleviating the identity switches caused by insufficiently discriminative pedestrian features. Second, a simple and effective hierarchical association strategy is proposed: after high-score and low-score detection boxes are associated separately, the embedding information around unmatched detection boxes is fully exploited for a further round of association, improving the association accuracy of multi-object tracking under occlusion. Result In comparative experiments, relative to FairMOT the proposed AIoU-Tracker raises the HOTA (Higher Order Tracking Accuracy) value from 58.3% to 59.8%, the IDF1 (ID F1 Score) value from 72.6% to 73.1%, and the MOTA (Multi-Object Tracking Accuracy) value from 69.3% to 74.4% on the MOT16 dataset; on the MOT17 dataset, HOTA rises from 59.3% to 59.9% and IDF1 from 72.3% to 72.9%. Conclusion The proposed feature-balancing tracking method achieves a better balance among the bounding-box size, heat-map, and center-point offset features during training and testing, making multi-object tracking results more accurate.
Multi-object tracking using adaptive-IoU loss and hierarchical association
Guo Wen, Liu Qigui, Ding Xinmiao(School of Information and Electronic Engineering,Shandong Technology and Business University)
Objective Multiple Object Tracking (MOT) belongs to a mainstream task in computer vision, which aims mainly to estimate the tracklets of multiple objects in videos and has important applications in the fields of autonomous driving, human-computer interaction, and human activity recognition. A large number of methods focus on improving the tracking performance based on the given detection results. Re-ID based trackers can be divided into two categories: Separate Detection and Embedding (SDE) tracking models and Joint Detection and Embedding (JDE) tracking models. The SDE tracking model tunes the detection model and the Re-ID model separately to optimize the model, but this leads to the disadvantage that the SDE tracking model cannot perform real-time detection. The JDE tracking model performs object detection while outputting the object location and appearance embedding information for the next step of object association, thus improving the algorithm's operational speed. However, the JDE tracking method suffers from the problem of identity switching due to ambiguous pedestrian features and the degradation of tracking accuracy due to occlusion between objects in complex scenes. To address these issues, an AIoU-Tracker multi-object tracking algorithm is proposed. Method Firstly, we utilize the backbone network detection head to design a special AIoU regression loss function that measures the overlap area, center point distance, and aspect ratio. This helps alleviate the problem caused by identity switching due to ambiguous pedestrian features. Secondly, a simple and effective hierarchical association method is proposed to leverage the embedding information around association failure detection frames for Re-ID. The high-score detection frames and low-score detection frames are associated separately, improving the association accuracy of multi-object tracking under occlusion conditions. We utilize a variant of the DLA-34 network architecture as the backbone network.
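The abstract does not give the exact form of the AIoU loss. For reference, the standard CIoU loss combines the same three measurements (overlap area, center-point distance, and aspect-ratio consistency), so an AIoU-style loss presumably has this general shape:

$$\mathcal{L}_{CIoU} = 1 - IoU + \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v, \qquad v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2, \qquad \alpha = \frac{v}{(1 - IoU) + v},$$

where $\rho(b, b^{gt})$ is the distance between the predicted and ground-truth box centers and $c$ is the diagonal length of the smallest box enclosing both boxes.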
The model parameters are trained on the COCO dataset and used to initialize the model. The experiments in this study are conducted on a system running Ubuntu 16.04 with 64GB of memory and a GTX2080Ti GPU. The software configuration includes CUDA 10.2. We train the model using the Adam optimizer for 30 epochs, with an initial learning rate of 10^-4. The learning rate is decayed to 10^-5 after 20 epochs, and the batch size is set to 16. We apply standard data augmentation techniques, including rotation, scaling, and color jittering. The input image size is adjusted to 1088×608, and the feature map resolution is set to 272×152. We evaluate our approach on the MOT Challenge benchmark, specifically the MOT16 dataset and the MOT17 dataset. The experiments utilize various datasets, including CrowdHuman and the MIX dataset (ETH, CityPerson, CUHKSYSU, Caltech, and PRW). The ETH dataset and CityPerson dataset only provide bounding box annotations, so we only train the detection branch on these datasets. The Caltech dataset, MOT17, CUHKSYSU dataset, and PRW dataset provide both bounding box positions and ID annotations, allowing for training of both branches. To ensure a fair comparison, we remove the overlapping videos between the ETH dataset and the MOT17 test dataset. The CrowdHuman dataset only contains bounding box annotations, so we perform self-supervised training on it. To evaluate the tracking performance, we use several well-defined metrics, including Higher Order Tracking Accuracy (HOTA), Multi-Object Tracking Accuracy (MOTA), ID F1 Score (IDF1), False Positive (FP), False Negative (FN), and Number of Identity Switches (IDs). MOTA primarily assesses the performance of the detection branch, IDF1 evaluates identity preservation, focusing on the association performance, while HOTA provides a comprehensive evaluation of both the detection branch and the data association performance. Result The performance of our method is compared to existing methods on two datasets.
The comparative results are as follows: 1) Our HOTA value is 59.8% on the MOT16 dataset, which is increased by 1.5% compared to the FairMOT. Our MOTA value is 74.4% on the MOT16 dataset, which is increased by 5.1% compared to the FairMOT. Our IDF1 value is 73.1% on the MOT16 dataset, which is increased by 0.5% compared to the FairMOT. 2) The HOTA value is 59.9% on the MOT17 dataset, which is increased by 0.6% compared to the FairMOT. The IDF1 value is 72.9% on the MOT17 dataset, which is increased by 1.6% compared to the FairMOT. Additionally, we conduct ablation studies on the MOT17 dataset to verify the effectiveness of different components in our method, which demonstrates that the proposed method significantly alleviates the competition in multiple object tracking. In the ablation studies, we observe a decrease in the number of identity switches through the added adaptive-IoU regression loss function. We also visualize the predicted Re-ID feature extraction positions, bounding box size feature, heat-map feature, and center point offset feature. The visualization results show that our method is more robust compared to FairMOT. Moreover, our hierarchical association method makes the association more robust. For example, even after two frames, obscured IDs can still be associated. Conclusion The proposed feature balancing tracking method achieves better balance among the bounding box size feature, heat-map feature, and center point offset feature during training and testing, resulting in more accurate multi-object tracking results. In this study, we propose two improvement measures for the FairMOT framework. Firstly, we design an AIoU regression loss module to optimize the detection branch, enabling it to optimize targets based on the current optimal distance and extract more accurate appearance features. 
Secondly, we optimize the Re-ID branch through a hierarchical association strategy module, utilizing three-level matching to enhance the tracking system's association performance. Experimental results demonstrate significant improvements on the MOT17 dataset, with HOTA increasing to 59.9%, IDF1 increasing to 72.9%, and MOTA increasing to 70.8%. However, there is a competition issue between the detection and Re-ID branches in the JDE tracking model, which can lead to a decrease in MOTA. Future research will focus on investigating this competition in the JDE tracking model.
|
OPCFW_CODE
|
Hannah Dennes and her family moved into Wingrave Street, Googong, about six years ago. Although they knew their neighbours well enough to wave to, they weren’t close – until this week.
Turns out, the sight of a red-bellied black snake in your backyard can be a great bonding experience.
The snake was spotted on the roof of Hannah’s neighbours, Troy and Pat Coelho, who live about three houses down. It fell off and slid down the drain – apparently chasing a frog. It made for a sleepless night, with everyone wondering where it had slithered to.
“The next day when I was driving out of the driveway, I saw it across the road,” Hannah said. “I called the snake catcher and from then on, we were on snake watch.”
Hannah said Gavin Smith from ACT Snake Removals told her, whatever she did, not to take her eyes off the reptile, so she didn’t, enlisting the help of another neighbour for the task.
“I reckon we watched it for more than an hour,” she said. “It started at one end of the street and then went into the backyard there, then moved to the next house,” she said.
“I knew there was a dog there so I raced in to get the dog out and put him in my place.”
Hannah said the closest she got to the snake was about a metre – and it wasn’t by choice.
“I had to walk past the snake to get the dog out,” she said.
“Then we saw the snake go through to the next neighbour’s garage. I went in and told them to shut the door from the garage to the house.”
Hannah said it was very lucky that almost everyone in the street was home that day, so they could help with the snake watch.
“You get to know people well, and bond with them, when you’re watching for a snake,” she said.
“You don’t really have time to be scared, although I must say I was a little frightened when you know it’s out there but you don’t know where it is.”
Although her neighbours praised Hannah for her quick-thinking, she described Gavin as the man of the hour.
“He really was the hero of the day,” she said. “But you had to laugh. We spent more than an hour watching the snake and he came over and had it in the bag within a matter of seconds.”
It was then taken off to Googong Dam, away from people and dogs.
But the whole encounter left residents of Wingrave Street with a new sense of community – and an apology from Pat Coelho, whose roof the snake first moved on to.
“So this snake just jumped off my roof,” he posted on social media. “Sorry to anyone who heard the colourful language coming from my house.
“Shout out to Hannah. Thanks to her eagle eye, the snake handler arrived just in time to catch the snake that fell from my roof, resulting in the entire neighbourhood now knowing my safe word.”
|
OPCFW_CODE
|
The patent's assignee for patent number 8793287 is
News editors obtained the following quote from the background information supplied by the inventors: "The present invention relates to database operations, and in particular to equi-join operations among split tables.
"Unless otherwise indicated herein, the approaches described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
"A common database operation in a relational database is the join operation. Generally, a join of two data sources creates an association of objects in one data source with objects that share a common attribute in another data source. A typical data structure for data sources is a data table (or simply, table) comprising rows and columns. Each row (or record) of the data table represents an object. Each column represents attributes of the object. For example, a data table may be defined for inventory in a retail store. The inventory items (e.g., pants, shirts, toasters, lamps, etc.) may constitute the objects represented by the data table. The attributes of each item may include such information as the name of the item, the number of items at that store, the location of the item in the store, and so on. Instances of an attribute are referred to as 'attribute values', 'actual values', or simply 'values.' An example of such a data table is shown in FIG. 14A, where each row 1402 represents a store item. Each row 1402 comprises attributes of the item in columns 1404a-1404c. Each row 1402 may include an ID attribute 106 that identifies the row. For example, the ordinal position of a row 1402 in the data table may be used as the ID attribute.
"FIG. 14B shows an example of another data table called Mail-Order. A join operation between the Inventory and Mail Order data tables can be performed. For example, consider a so-called 'equi join' type of join operation where the join condition (join predicate) specifies a relationship (e.g., equality) between attributes that are common to both data tables. Suppose the join condition is: items in the Inventory data table that are the same as the items in the Mail-Order data table. For example, the join expression might be formulated as 'Table Inventory inner join Table MailOrder on Inventory.Item=Mail-Order.Item'.
"An execution plan (query plan) for performing the join operation may include the following steps: 1. read out a row from the Inventory table 2. compare the actual value of the Item attribute in the row that was read out from the Inventory table with the actual value of the Item attribute in a row of the Mail-Order table 3. if there is a match, then output the row that was read out from the Inventory table and the matching row in the Mail-Order table 4. repeat steps 2 and 3 for each row in the Mail-Order table 5. repeat steps 1-4 for each row in the Inventory table A result of the join operation can be represented by the data table shown in FIG. 14C.
"A database may comprise data tables that contain thousands of records each. In addition, records may have tens to hundreds of attributes each, and the actual values of some attributes may be lengthy (e.g., an attribute that represents the name of a person may require an allocation of 10-20 characters of storage space). Such databases can impose heavy requirements in the storage of their data. Accordingly, a practice of using dictionaries has arisen, where the actual values (e.g., 10-20 characters in length) of instances of an attribute in the data table are replaced by (or otherwise mapped to) an associated 'value ID' (e.g., two or three bytes in length).
"Consider the Inventory table and the Mail-Order table, for example. The actual values for instances of the Item attribute in the Inventory table include 'pants', 'shirts', 'toasters', and 'lamps'. A dictionary can be defined for the Item attribute. For example, the dictionary may store the actual values of the Item attribute in alphabetical order and the value IDs that are associated with the actual values might be the ordinal position of the actual values in the dictionary.
"An actual value in the data table is represented only once in the dictionary. For example, the actual value 'lamps' occurs in twice in the Mail-Order table, but there is only one entry in the dictionary; thus, the dictionary might look like: lamps pants shirts toasters The value ID associated with the actual value 'lamps' could be 1, being located in the first position in the dictionary. The value ID associated with the actual value 'pants' could be 2, being the second position in the dictionary, and so on.
"FIG. 15 shows the Inventory and Mail-Order tables of FIGS. 14A and 14B, modified by the use of a dictionary, more specifically a central dictionary. In particular, the actual values for instances of the Item attribute in the data tables (i.e., text) have been replaced by their corresponding associated value IDs (i.e., an integer). It can be appreciated that the use of dictionaries can reduce the storage burden of large databases.
"The distribution of databases across separate database servers is commonly employed, for example, to distribute the storage burden across multiple sites. In a distributed database configuration, one or more constituent data tables of the database are partitioned (split) into some number of 'partitions,' and the partitions are distributed across many database servers. While the processing of certain queries in a distributed database configuration may be accomplished using only the data within a given partition of a data table, queries that involve a join operation require access to data from all of the partitions of the data tables being joined.
"The execution plan of a join operation involving split (partitioned) data tables conventionally involves communicating the actual values of the attribute(s) specified in the join condition among the partitions in order to evaluate the join condition. One can appreciate that the execution plan may therefore entail a significant amount of data communication among the constituent partitions. As explained above, a dictionary can be used to reduce the space requirements for storing attribute values. Accordingly, each partition may be provided with its own local dictionary (rather than the central dictionary indicated in FIG. 15), the idea being that the associated value IDs can then be communicated among the partitions instead of the actual values. However, the value IDs in a given local dictionary are generated independently of the values IDs in the other local dictionaries. In other words, value IDs locally generated in one partition of a data table may have no correlation to value IDs locally generated in another partition of that data table. Suppose, for example, the Item attribute is specified in a join condition. Suppose further that the actual value 'pants' has a value ID of 2 in the local dictionary of one partition, a value ID of 7 in the local dictionary of another partition, and a value ID of 15 in yet another partition. The execution plan for the join operation may communicate the multiple different value IDs for 'pants' (i.e., 2, 7, 15) among the partitions. However, the value IDs would be meaningless in any one partition for the join operation because value IDs only have meaning for the partition in which they were generated. For example, while the value ID 2 may be associated with 'pants' in one partition, the value IDs 7 and 15 do not, and in fact very likely may be associated with completely different items; the value IDs could not be used to perform a join operation.
"These and other issues are addressed by embodiments of the present invention, individually and collectively."
As a supplement to the background information on this patent, VerticalNews correspondents also obtained the inventors' summary information for this patent: "In embodiments, a join operation between a first split data table and a second split data table includes receiving reduction data from each of the first partitions of the first data table. In a second partition, actual values of a join attribute that occur in the second partition and also occur in one of the first partitions are assigned a global ID. A globalized list for the second partition includes a Doc ID that identifies a data record in the second partition for which the actual value of the join attribute also occurs in one of the first partitions. The corresponding global ID is associated with that actual value. Each first partition receives a table of global IDs that are associated with actual values in the first partition. Each first partition creates a globalized list that includes a Doc ID identifying data records in the first partition for which the actual value of the join attribute is identified by a global ID in the received table. The join operation can then be performed using the globalized lists of the first partitions and the globalized lists of the second partitions.
"In an embodiment, a computer system can have stored therein executable program code configured to cause the computer system to perform the foregoing steps.
"In embodiments, the same global ID may be associated with an actual value that occurs in the second partition and in at least one or more of the first partitions.
"In embodiments, multiple occurrences of an actual value in the second partition are associated with the same global ID.
"The following detailed description and accompanying drawings provide a better understanding of the nature and advantages of the present invention."
For additional information on this patent, see: Peh, Thomas; Schwedes, Holger; Stephan, Wolfgang. Equi-Joins between Split Tables. U.S. Patent Number 8793287, filed
Our reports deliver fact-based news of research and discoveries from around the world. Copyright 2014, NewsRx LLC
|
Easiest way to bread chicken for frying?
I usually toss the flour and chicken in a bag and shake it up. The problem is that pieces of chicken will often stick together and not get evenly coated. Is there an effective way to bread my chicken evenly without getting clumps of chicken?
related : http://cooking.stackexchange.com/q/30113/67
One piece at a time? Into the bag, shake to coat, out of the bag, shake to remove excess flour, put aside for subsequent egg-washing and breading once all pieces are done.
Also helps to have one hand only touching dry ingredients, the other only wet.
I dump the chicken in a large plastic container and (as ferronrsmith) sieve the flour on top of the chicken. Then, I put the lid on and give it a good shake.
It helps if the chicken is dry. (Dried with paper towels after washing.)
Another meat-washer! Why?!
@ElendilTheTall, excellent question! I do it to get rid of excess blood, basically.
I don't use the bag method to bread chicken, or other foods. I prefer the slightly more manual, but very effective traditional method:
Put the breading mix (for example, seasoned flour) into a shallow dish, such as a pie plate or a shallow casserole
Place one or several (as many as comfortably fit) pieces of chicken into the breading mix, then turn them over and place back down. You might need to do the sides too, for large pieces. You can pick up some mix with your fingers and put it onto any uncovered spots, too.
When you remove the chicken, shake it slightly above the mix to let extra come off and be reusable.
This method is also extensible to more complicated breading techniques such as flour, then egg wash, then breadcrumbs. You simply have three pie plates, one for each layer in the breading, and move the food through the layers.
If you are doing this sort of dry/wet/dry breading, it helps to use one hand only for the dry stages, and the other hand (or tongs) for the wet stage.
In comparison to the bag method, this technique has the following advantages:
You can directly control and monitor the breading on each piece of food
No reasonable way for the pieces to stick together during the breading process
It scales up to any amount of food easily in an assembly line
You can do multiple layers, including wet layers, conveniently
There is no danger of a bag splitting and getting flour all over the kitchen
... and the following disadvantages:
For small quantities of food, it may be a little more work
The pie plates have to be washed, whereas the bag can probably be discarded
Your hand(s) get mucky unless you use tongs at every stage
Note: this answer assumes small quantities, as in home cooking. Restaurant production also uses this method, but scaled up in a couple of ways. I have never heard of nor seen a commercial kitchen use the bag method. Now, at industrial scales, they have some cool devices.... :-)
You can use a sieve :). That's what I always do, and the chicken is always evenly coated with flour. The bag creates lumps; try to avoid that if you can. Try sifting the flour also :)
It might be helpful to the OP if you were a little more specific about what using a sieve means - I can think of variations people might try.
I am puzzled by your last sentence... do you also sift the chicken? :D
Sifting is the process of using a sieve to remove lumps and to filter large particles. An example of the sifting process is here : http://mayblerose.files.wordpress.com/2010/07/sponge-cake-sift-flour.jpg
"sift the flour" was already reasonably clear, though not to everyone - but "use a sieve" definitely still isn't.
So basically you'd place the flour in the sieve and, as you sift, the chicken underneath gets coated with flour. Keep repeating this process until you have covered the entire chicken. If you want to apply other bases (like oats), make some egg batter, dip the chicken in it, then dip it in a bowl of oats... I like that.
If you're going to use the bag method, you may want to use a clear plastic bag, and only drop in between 1 and 3 items at a time (exact number depends on the size of the items relative to the size of the bag).
If you do end up getting two items stuck together, just grab one of the items through the bag, and shake until the other item falls away.
If you're still having problems, set up a regular breading station
It helps to sift the flour. I use a colander for breading. I just did this with pickles and then again with chunks of chicken and it works great, just make sure there is a pan under the colander when you are breading. Put whatever you are breading in plastic bag first, shake it up, don't use a lot of flour, dump into the colander, shake it up and bam, no mess no fuss.
The OP specifically said the problem is in coating the chicken in the bag, not afterward.
I've edited your answer to remove the things that were meant to be comments on ferronrsmith's answer (rude ones, too). Also, at least in the US, colanders are usually a bowl with decent-sized holes in it while sieves are metal mesh, and generally finer.
|
on July 14, 2004
This book is breezy enough in tone and works well enough as a programmer's reference but the rah-rah Access! delivery grates after a while. The authors are old school in that they introduce an end of life data access model (DAO) at length and then make endless and confusing references to it while they document the current initiative, ADO. (They spend 64 pages on DAO--a legacy object model useful only for desktop databases and 46 pages on ADO, the preferred model, an evolving technology for use with everything from desktop to enterprise.) These topics should be treated in isolation from one another as a side-by-side comparison is just unnecessarily confusing, especially for users of Access 2000/XP/2003. It's almost like their only intent is to prove how long they've been Access developers instead of providing the most streamlined and useful documentation--in other words, it's more about them than it is you. They're also big on their own received wisdom as opposed to accuracy like encouraging readers to document their code as if they were writing a bible because there is no speed hit involved--this isn't necessarily true. They also have a bad habit of not declaring variables in their sample code (with annoyingly trendy scenarios like "how many lattes can I buy?"), which is a cause for concern in a book about programming. There is useful, if overenthusiastic, coverage of the new features in Access 2003. If you're comfortable with the series and need a lengthy reference to complement the Microsoft Access help files, you'll find useful information here...with large swaths of information (hundreds of pages at a time) that you'll probably never use.
on June 12, 2004
I am writing this review because I believe the expectations created by the book description and reviews were not met.
I do not disagree that the book is comprehensive as the description and the reviews point out. It discusses subjects that are wide-ranging and, I am certain, very important to many readers.
My contention is that the book is not organized in such a manner as to allow a less experienced user to get at what he wants. I think books like these need to better indicate for whom they are written. The foreword talks about it being for those who have been using Access for some time and are just beginning to jump into the world of code. I have had a reasonable amount of experience with Access, relational databases and electronic spreadsheets. I have a decent understanding of programming, although only a little experience with object-oriented programming. I feel that I understand Access sufficiently well that I have a base upon which to add the VBA skills. I really don't believe that the book is written for such a person.
My opinion of the shortcomings follows.
I believe the most important component of a reference source is the index. In this case the index is 24 pages with 100 entries on each. The frustration I faced in trying to answer my questions through the index and table of contents was as bad or worse than trying to use Microsoft help screens or on-line resources.
I do not understand the organization of the book. It does not seem to follow a path from broad to narrow or any other progression that makes intuitive sense. I know this is acceptable in a reference but only, I would say, if there was a sufficient index.
The general execution of the book was also inferior. I was frequently distracted by grammar errors such as the wrong word (e.g., "than" instead of "that") and even typos. The figures and tables were numbered but not titled, and in at least one case one was misreferenced in the text. I found myself really in need of a diagram in many cases but there were none to be found. The car analogy used to explain Object Oriented Programming was not clear and, I think, possibly incorrect. The analogy describes "Press" as a method of the "Gas Pedal" object when it seems to me to be more like an event triggered by the user. I am not saying that I am necessarily right or the authors definitely wrong. I'm just saying that the analogy was ineffective. Again, I would have loved (and expected) a diagram or some representation that would explain this difficult concept.
I want to say that in the end I did find the elusive answer to my question, although just by chance. I think this will illustrate my frustration with this book. I wanted to know how to automate the import of a TXT file. I had searched on-line help and 2 other Access books as well as this book but could not find the answer. I finally was skimming through the methods of the DoCmd Object in Appendix E (The Access Object Model) and happened on the DoMenuItem method (page 750). The description of this method explained that it executes the specified menu item but that it was a legacy from Access 97 that had been replaced by the RunCommand method in later versions of Access. I then flipped forward in the same appendix to the RunCommand Method Arguments section (page 793), which started with this comment: "One of the easiest ways to perform a variety of functions in Microsoft Access is through use of the RunCommand Method." I thought that if it was so easy, then why doesn't somebody include it in their discussion of VBA. (I don't know if this book does so because the only reference to the RunCommand in the index is to the Appendix pages noted above. There are no references to the DoCmd or menus in the index.)
When I found this discussion, I knew that I had answered my question but I still had more errors to deal with. The section was entitled RunCommand Method Arguments but the verbiage that preceded the table stated "The RunCommnad takes a single argument, the acCommand constant. All of the available acCommand constants are listed in the following table." The column in the table that follows is entitled "Argument." I know that constants can be arguments, but don't these constants relate to the acCommand, which is the only argument of the RunCommand? I know this is picky (and I feel more than a little self-conscious for exposing my ignorance) but doesn't a book about language need to be precise and consistent in its use of terms?
I would also ask what the value is of this (the RunCommand Method Arguments) table. It simply lists the constants (Arguments [?]). This information is available and more easily accessible in on-line help. The inclusion of this table (and possibly others) may make the reference more complete but does not necessarily make it more valuable. At the risk of beating a dead horse, I feel this table illustrates another shortcoming of this text in that it isn't even organized very well. The 9 page table has two columns that list the constants alphabetically. The first column starts on page 793 with acCmdAboutMicrosoft and finishes on page 802 with AcCmdPivotChartDrillInto. The list continues in column 2 back on page 793. I know this is confusing, but so is the book.
I have no doubt that the authors of this book are extremely knowledgeable (and even really good and helpful people.) I would love to know what they know. That is why I bought this book. I can't imagine the difficulty in trying to organize and explain such a large and complex model, but that is what they have attempted to do. I would love to see this book reworked and presented in a better way because I think it contains a lot of good information. I just don't think enough consideration was given to the user or enough care was taken in the execution.
on May 19, 2004
I have to agree with reviewer Paul E. that the authors did an outstanding job on this book.
The book is comprehensive, covering just about everything you need to know when you're working with Access application development.
For me, the 250 pages or so of appendixes have been the most useful, but there's also a sixty-page chapter on database security in the middle of the book that should be required reading for all Access developers.
Here's the table of contents from my copy of the book:
1. Intro to Access.
2. Access, VBA, and Macros.
3. New Features in 2003 (and 2002).
4. VBA Basics.
5. Using the VBA Editor.
6. Using DAO to Access Data.
7. Using ADO to Access Data.
8. Executing VBA.
9. VBA Error Handling.
10. Using VBA to Enhance Forms.
11. Enhancing Reports with VBA.
12. Creating Classes.
14. SQL & VBA.
15. Working with Office Applications.
17. Understanding Client-Server Development with VBA.
18. Windows Registry.
19. Using the ADE Tools.
20. Macro Security.
Appendix A: Upgrading to Access 2003.
Appendix B: References for Projects.
Appendix C: DAO Object Method and Property Descriptions.
Appendix D: ADO Object Model Reference.
Appendix E: Access Object Model.
Appendix F: Windows API Reference Information.
Appendix G: Naming Conventions.
Appendix H: VBA Reserved Words.
Appendix I: Tips and Tricks.
Appendix J: ADO Object Argument Information.
Appendix K: Access Wizards, Builders and Managers.
Appendix L: Windows Registry Information.
on April 13, 2004
This is a very special book. The authors obviously went way out of their way to include everything that you could need to be a successful Access developer. As far as I know, this extensive reference material has never before been assembled in one place. As a result, it is just the right thing for aspiring intermediate and advanced users of Microsoft Access. Even many developers with extensive experience will be surprised at what can finally become clear to them by checking out what this reference makes available. You'd have to search the Microsoft and other sites for weeks to assemble all this on your own.
This well-written reference covers everything from VBA basics to using the new Access Developer Extensions, which are part of Visual Studio Tools for the Microsoft Office System. In between it also covers error handling, enhancing forms, enhancing reports, SQL coding, and even working with the Windows Registry. The appendices go all the way from A to L, or from Upgrading of Access (what other book covers this subject at all, never mind so thoroughly?) to VBA Reserved Words.
I have never seen such comprehensive reference material! And is it loaded with tips, tips, tips! The authors have obviously lived in the trenches with Access for some time. They know what to watch out for. They know what kind of protocols you should set up yourself to boost your success.
It took the experience of four co-authors and several other contributors to bring this work together. Patricia, Teresa, Graham and Armen have provided a unique Access reference that will get you up and going fast and save you scads of time that you would otherwise have wasted learning "the hard way" or, at best, wasted digging through a multitude of written and online references. This is a special, one-of-a-kind reference work!
|
WSO2 has announced the release of WSO2 Web Services Framework for Spring 1.0, which integrates Apache Axis2 into Spring. With this framework, developers can use either a code-first or contract-first approach to web services development (whereas WSO2 says Spring Web Services emphasizes contract-first). The WSO2 Web Services Framework for Spring 1.0 is released under the Apache License 2.0 and is based on the open source Apache Axis2/Java Web services engine, providing developers with a tested, proven platform for enterprise-class Web services that is ready to use. Key features of WSF/Spring 1.0 are:
- Support for the WS-* stack, including WS-Addressing, WS-Policy, WS-Security, WS-SecurityPolicy, WS-ReliableMessaging, WS-Eventing, and SOAP Message Transmission Optimization Mechanism (MTOM).
- Inversion of Control (IoC) container support – WSF/Spring enables Spring services to be exposed through an IoC container. Additionally, it offers support for editing the Axis2 bootstrap configuration through the IoC container.
- Automated WSDL generation – the Axis2/Java code generation tool lets developers generate code for both WSDL 1.1 and WSDL 2.0. Data binding is also available with Axis Data Binding (ADB).
- Query support – WSF/Spring supports querying a service's WSDL via "?wsdl", its schema via "?xsd", and its policies via "?policy".
- Method exclusion in Spring beans – going beyond simply exposing Spring beans, WSF/Spring gives developers fine-grained control over which methods are exposed as Web service operations.
- Posted by: Joseph Ottinger
- Posted on: April 02 2008 12:27 EDT
- Re: WSO2 releases Web Services Framework for Spring by Amin Abbaspour on April 03 2008 01:32 EDT
- Acegi integration? by Andrey Utkin on April 03 2008 02:45 EDT
- Re: WSO2 releases Web Services Framework for Spring by Roland Altenhoven on April 03 2008 04:20 EDT
- ws-transaction by olonga henry on April 03 2008 10:24 EDT
Congratulations both to team and esp. to users. Like many others I prefer code-first approach and here is where WSO2 does the best :)
Is it possible to use Acegi with WSF/WS-Security?
There is some ACEGI integration between Rampart (the WS-Security library in WSF/Spring) and ACEGI. However, I'm sure there is more we can do. It might be better to take this discussion to the project mailing list or forum: http://wso2.org/forum/462 Paul
"Automated WSDL generation via the Axis2/Java code generation tool lets developers generate code for both WSDL 1.1 and WSDL 2.0. Data binding is also available with Axis Data Binding (ADB)." Congratulations – good work, resulting in very nice features. Are there plans to support additional stacks in the future, e.g. Apache CXF or Metro? Roland, SOA Competence Network
does WSO2 support WS-Transaction?
|
There are three ways to watch these lectures:
(1) A good way is to watch the lectures and use the Udacity discussion forums to interact with other people watching the lectures.
(2) Something better is that although these lectures are free and don't really require any books or texts, there are books or texts that I would feel really guilty not telling you about. These include Alexander Osterwalder's "Business Model Generation" book for understanding the Business Model Canvas, and the Startup Owner's Manual, written by yours truly and my co-author Bob Dorf, that is kind of a standard for customer development. Let me emphasize again, these are not required and you can understand the lectures just fine without them but they'll surely help explain a lot of the detail in the 617 pages because they're almost an encyclopedia for startups. Enough of the ad, that's the last time I'll mention buying the texts or books.
(3) The best way to watch these lectures is, instead of just watching them, do everything in (1) and (2) above and also form a startup team and get outside of the building in between watching the lectures, so you're actually using the lectures as a guide for what you are supposed to be doing week to week. One of the reasons we suggest that instead of just watching the lectures you actually get out of the building and do it, either by yourself or with your team, is because startups are not about lectures, and entrepreneurship is not about your grades and this class. They're about the work you do outside the building after you watch the lectures, not how much you watch the lectures. Entrepreneurship is experience: it's hands-on, and it's immediate and intense feedback.
If you're doing more than just watching these lectures, one of the things you're going to be staring at a lot is the Business Model Canvas. If you're watching this video, the first thing I suggest is stop, go to businessmodelgeneration.com, and download the canvas and print out a bunch of copies. Then go get a bunch of yellow stickies and a red pen, because that's what you're going to be using for the next couple of weeks. Now, what we actually do with the Business Model Canvas starts in week 1. You use those yellow stickies to put up all your hypotheses. You probably don't have many facts yet, but you can make some pretty good guesses about what's a value prop, where the customers are, what your channel is, and what your pricing is. But what you do next is really interesting: as you come back in with facts, we're going to be marking up the canvas, crossing out the things that actually weren't true and replacing them with the things we've learned, and we're going to do that week after week until what we end up with is a series of Business Model Canvases. What you'll find out is that at the end of the class you can actually play back these canvases almost like a filmstrip and see what you've learned over time.
An optional piece of software is something called launchpad central at www.launchpadcentral.com. The launchpad central allows you to share your work, your customer discovery narrative, what you're doing outside the building with mentors and instructors and everybody else. Some of you might think, "Well, I don't want to share what I'm doing," but it's not sharing your IP or your great ideas--it's actually sharing what you found out in testing your hypothesis and it allows others to comment in real time. So in fact, you won't have to sit alone thinking about "Am I on the right path or is this the right strategy, etc." The launchpad central comes with the startup weekend program, which is available to anybody else here at launchpadcentral.com. It is not required for this class but is a great resource if you're interested.
So to prepare for watching these lectures, of course, you could just hit the play button, but it would actually be helpful if you do some reading. As I said earlier, there's optional reading, and you'll see this going forward after each lecture: the Startup Owner's Manual, pp. 1-50 (or the strategy volume if it's an e-book), and Business Model Generation, pp. 14-49, completely optional. But what's available for free, if you want to get a feel for the core strategy and how we teach this in real life, is at steveblank.com category/Lean LaunchPad, or just click on the link. You could also look at other students' presentations. We have our students make 2-min QuickTime videos which are summaries of their presentations, and you can see those at steveblank.com/slides. But if you want to actively engage in the class, your first real homework is to download that Osterwalder canvas and, using yellow stickies, take a shot at actually filling out all your hypotheses on the business model canvas. It's okay if you're just guessing, because the whole class is about how you turn those guesses into facts. So get ready: download the canvas, put on those yellow stickies, and then, looking at those boxes, start figuring out who you would call on and who you would contact to start testing those hypotheses. Take a look for the first week at your value proposition and customer segment boxes. Who would you want to talk to to validate your ideas about what features are important, and who would you talk to to validate that these are customers you want to go after? We're just getting you ready for the work you're going to do after the next lecture. See you soon!
|
OPCFW_CODE
|
Understanding how to identify the social determinants of health from electronic health records (EHRs) could provide important insights to understand health or disease outcomes. We developed a methodology to capture 2 rare and severe social determinants of health, homelessness and adverse childhood experiences (ACEs), from a large EHR repository.
We first constructed lexicons to capture homelessness and ACE phenotypic profiles. We employed word2vec and lexical associations to mine homelessness-related words. Next, using relevance feedback, we refined the 2 profiles with iterative searches over 100 million notes from the Vanderbilt EHR. Seven assessors manually reviewed the top-ranked results of 2544 patient visits relevant for homelessness and 1000 patients relevant for ACE. word2vec yielded better performance (area under the precision-recall curve [AUPRC] of 0.94) than lexical associations (AUPRC = 0.83) for extracting homelessness-related words. A comparative study of searches for the 2 phenotypes revealed a higher performance achieved for homelessness (AUPRC = 0.95) than ACE (AUPRC = 0.79). A temporal analysis of the homeless population showed that the majority experienced chronic homelessness. Most ACE patients suffered sexual (70%) and/or physical (50.6%) abuse, with the top-ranked abuser keywords being “father” (21.8%) and “mother” (15.4%). Top prevalent associated conditions for homeless patients were lack of housing (62.8%) and tobacco use disorder (61.5%), while for ACE patients it was mental disorders (36.6%–47.6%). We provide an efficient solution for mining homelessness and ACE information from EHRs, which can facilitate large clinical and genetic studies of these social determinants of health.
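The lexical-association step can be illustrated with a toy sketch. Everything below (the sample notes, the PMI scoring, the function name) is a hypothetical simplification for illustration, not the authors' pipeline, which ran word2vec and lexical associations over roughly 100 million Vanderbilt notes:

```python
from collections import Counter
import math

# Hypothetical toy notes; the real study iteratively searched
# ~100 million Vanderbilt EHR notes with relevance feedback.
notes = [
    "patient is homeless and sleeping in shelter",
    "homeless veteran staying in shelter downtown",
    "patient reports stable housing and employment",
    "chronic homelessness noted referred to shelter services",
]

def pmi_neighbors(notes, seed):
    """Rank words by pointwise mutual information (PMI) with a seed
    term, using note-level co-occurrence counts."""
    docs = [set(n.split()) for n in notes]
    n_docs = len(docs)
    word_df = Counter(w for d in docs for w in d)
    pair_df = Counter()
    for d in docs:
        if seed in d:
            for w in d:
                if w != seed:
                    pair_df[w] += 1
    # PMI(seed, w) = log P(seed, w) / (P(seed) * P(w))
    scores = {
        w: math.log((joint / n_docs) /
                    ((word_df[seed] / n_docs) * (word_df[w] / n_docs)))
        for w, joint in pair_df.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Candidate lexicon terms associated with "homeless"
print(pmi_neighbors(notes, "homeless"))
```

In the study, candidate lists of this kind seed the phenotype lexicon, which relevance feedback then refines; word2vec embeddings (AUPRC 0.94) outperformed this style of lexical association (AUPRC 0.83) for homelessness-related words.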
After participating in this activity, the learner should be better able to:
- Engage in information extraction for social determinants of health
- Understand deep learning in clinical NLP
Cosmin Adrian Bejan, PhD
Assistant Professor of Biomedical Informatics
School of Medicine at Vanderbilt University
Cosmin Adrian Bejan is an Assistant Professor of Biomedical Informatics in the School of Medicine at Vanderbilt University. His research lies at the intersection of biomedical informatics, natural language processing, and machine learning. Currently, he is developing text mining technologies for processing narrative patient reports to identify illness phenotypes and to facilitate clinical and translational studies of large cohorts of critically ill patients. One methodology he recently devised for this purpose is based on statistical hypothesis testing to extract the most relevant clinical information corresponding to the phenotype of interest. For this study, he also developed a state-of-the-art assertion classifier for assigning assertion values to the concepts associated with a specific phenotype.
Dr. Bejan received his B.S. and M.S. in computer science from the University of Iasi, Romania and his Ph.D. in computer science from the University of Texas at Dallas. He completed postdoctoral studies at the Institute for Creative Technologies of the University of Southern California, the Department of Biomedical Informatics and Medical Education at the University of Washington, and the Department of Biomedical Informatics at Vanderbilt University.
|
OPCFW_CODE
|
HEIC isn't rotating correctly.
Are you using the latest version? Is the version currently in use as reported by npm ls sharp the same as the latest version as reported by npm view sharp dist-tags.latest?
Yes
What are the steps to reproduce?
I'm using sharp / libvips compiled with libheif. I am trying to transform the attached image, shot from an iPhone. It doesn't seem to rotate correctly.
What is the expected behaviour?
Image rotates correctly.
Are you able to provide a minimal, standalone code sample, without other dependencies, that demonstrates this problem?
const request = require('request');
const sharp = require('sharp');

const imageUrl = 'https://padlet-uploads.storage.googleapis.com/12/cd405c2b88963128442daf88cc8ccab9/IMG_0620.HEIC'

request({ uri: imageUrl, encoding: null }, async (error, response, body) => {
  if (error) throw error
  // rotate() with no arguments auto-orients based on the EXIF Orientation tag
  const img = sharp(body).rotate()
  const outputBuffer = await img.toFormat('jpeg').toBuffer()
})
Are you able to provide a sample image that helps explain the problem?
https://padlet-uploads.storage.googleapis.com/12/cd405c2b88963128442daf88cc8ccab9/IMG_0620.HEIC
N.B. when you view the image on a Mac, it'll show as portrait, but it is actually landscape.
The metadata I get from sharp for this image:
{
format: 'heif',
size: 1483481,
width: 3024,
height: 4032,
space: 'srgb',
channels: 3,
depth: 'uchar',
isProgressive: false,
pages: 1,
pageHeight: 4032,
pagePrimary: 0,
hasProfile: false,
hasAlpha: false,
orientation: 6,
exif: <Buffer 45 78 69 66 00 00 4d 4d 00 2a 00 00 00 08 00 0b 01 0f 00 02 00 00 00 06 00 00 00 92 01 10 00 02 00 00 00 0a 00 00 00 98 01 12 00 03 00 00 00 01 00 06 ... 2162 more bytes>,
isAnimated: false
}
What is the output of running npx envinfo --binaries --system?
libheif 1.5.0 and libvips latest master produces the expected width and height for me:
{
format: 'heif',
size: 1483481,
width: 4032,
height: 3024,
space: 'srgb',
channels: 3,
depth: 'uchar',
isProgressive: false,
pages: 1,
pageHeight: 3024,
pagePrimary: 0,
hasProfile: true,
hasAlpha: false,
orientation: 6,
exif: <Buffer 45 78 69 66 00 00 4d 4d 00 2a 00 00 00 08 00 0b 01 0f 00 02 00 00 00 06 00 00 00 92 01 10 00 02 00 00 00 0a 00 00 00 98 01 12 00 03 00 00 00 01 00 06 ... 2162 more bytes>,
icc: <Buffer 00 00 02 24 61 70 70 6c 04 00 00 00 6d 6e 74 72 52 47 42 20 58 59 5a 20 07 e1 00 07 00 07 00 0d 00 16 00 20 61 63 73 70 41 50 50 4c 00 00 00 00 41 50 ... 498 more bytes>
}
I note isAnimated is part of your output, but that is not from sharp, so I guess you're using a fork or wrapper, which could be the source of this problem.
I hope this information helped. Please feel free to re-open with more details if further assistance is required.
Apologies for going MIA. I can confirm this works as expected using the latest version of libheif.
|
GITHUB_ARCHIVE
|
Matlab r2014b Crack + Setup free download, from infodady.com. It is a fantastic piece of software for highly professional tasks, including design and simulation. The word Matlab is a combination of two words: "Mat" is derived from "Matrix" and "Lab" is derived from "Laboratory". The initials of both words combined give the software its unique name, "Matlab". As the name indicates, this software covers everything from basic to advanced fields of engineering, including electrical engineering, computer science, and mechanical engineering, since all their calculations can be done through this software. Nowadays the fast-moving world is moving towards automated calculation, processing, presentation, and simulation of data from different technological disciplines.
Matlab r2014b Crack
Because data in figures and numbers is vast, laborious to handle, and error-prone, it is a matter of relief, and more than that a matter of precision, to use automated software for data-processing tasks. This version is very popular among users compared to the earlier 2014a release. The r2014b version has many modifications and upgrades made by its developers. It has the capacity to handle larger volumes of data, so analysis, simulation of data, and similar tasks can easily be performed by engineers doing system integration and code sharing. The manufacturer of Matlab, MathWorks, is known as the parent math-computing software company, with its main head office in Massachusetts, United States of America.
Try out other stable versions of MATLAB + Crack.
This American company was established in 1984 and at present has 3000 employees working in fifteen different countries across the globe. Through this software, engineers working in different disciplines, such as automotive, electronics, electrical, telecommunications, computing, aerospace, and aviation, can speed up their work with automated calculations and simulations, for the sake of development, innovation, and discovery. Matlab r2014b Crack can be downloaded from the download crack link on softwarezee.com. Matlab r2014b Crack is used to register Matlab R2014b.
Features of Mathworks Matlab r2014b Crack:
- It has an extraordinary facilitating feature for users that aids installation: installing takes a matter of seconds, with no complicated steps involved.
- This version provides users with a totally new working interface / environment for graphic design and programming.
- Has a TVSTH sharing feature that aids coordination of the software with GitHub.
- Among the other handy features are optimized toolbox options.
- It has a built-in option for automatic date and language updates.
- Has an excellent ability to customize images.
- Last but not least, it comes with a larger data-transaction capability.
Toolboxes included in Matlab r2014b Crack:
Matlab Version 8.4 [R2014b]. Simulink Version 8.4 [R2014b]. Bioinformatics Toolbox Version 4.5 [R2014b]. Communications System Toolbox Version 5.7 [R2014b]. Computer Vision System Toolbox Version 6.1 [R2014b]. Control System Toolbox Version 9.8 [R2014b]. Image Processing Toolbox Version 9.1 [R2014b]. Curve Fitting Toolbox Version 3.5 [R2014b]. DSP System Toolbox Version 8.7 [R2014b]. Data Acquisition Toolbox Version 3.6 [R2014b]. Database Toolbox Version 5.2 [R2014b]. Econometrics Toolbox Version 3.1 [R2014b]. Financial Toolbox Version 5.4 [R2014b]. Fixed-Point Designer Version 4.3 [R2014b]. Fuzzy Logic Toolbox Version 2.2.20 [R2014b]. Global Optimization Toolbox Version 3.3 [R2014b]. Image Acquisition Toolbox Version 4.8 [R2014b]. Instrument Control Toolbox Version 3.6 [R2014b]. Matlab Builder EX Version 2.5.1 [R2014b]. Matlab Builder JA Version 2.3.2 [R2014b]. Matlab Builder NE Version 4.2.2 [R2014b]. Matlab Coder Version 2.7 [R2014b]. Matlab Compiler Version 5.2 [R2014b]. Matlab Report Generator Version 4.0 [R2014b]. Mapping Toolbox Version 4.0.2 [R2014b]. Neural Network Toolbox Version 8.2.1 [R2014b]. Optimization Toolbox Version 7.1 [R2014b]. Parallel Computing Toolbox Version 6.5 [R2014b]. Partial Differential Equation Toolbox Version 1.5 [R2014b]. Real-Time Windows Target Version 4.5 [R2014b]. Robust Control Toolbox Version 5.2 [R2014b]. Signal Processing Toolbox Version 6.22 [R2014b]. SimBiology Version 5.1 [R2014b]. SimHydraulics Version 1.15 [R2014b]. SimMechanics Version 4.5 [R2014b]. SimPowerSystems Version 6.2 [R2014b]. Simscape Version 3.12 [R2014b]. Simulink Coder Version 8.7 [R2014b]. Simulink Control Design Version 4.1 [R2014b]. Simulink Real-Time Version 6.1 [R2014b]. Spreadsheet Link EX Version 3.2.2 [R2014b]. Stateflow Version 8.4 [R2014b]. Statistics Toolbox Version 9.1 [R2014b]. Symbolic Math Toolbox Version 6.1 [R2014b]. System Identification Toolbox Version 9.1 [R2014b]. Wavelet Toolbox Version 4.14 [R2014b].
Method to Crack Matlab R2014b:
- You need to disconnect your computer \ laptop \ notebook \ netbook \ system from the internet.
- That may be done either by simply turning off the internet device or by disconnecting from within the system, as you like.
- Now run the setup of Matlab R2014b and follow the steps below.
- Select the option "use a file installation key" and then click "NEXT".
- Now click yes, and after that, click "NEXT" again.
- Then check the option "I have the file installation key for my license".
- Now the matlab r2014b crack file that you downloaded earlier comes into play.
- There is a serial key in that crack file; you just have to enter that key.
- When the installation process asks you for a license file, browse for "license.dat".
- After completion of installation, copy the cracked .dll + .exe files into the directory, overwriting the existing ones (x64 for 64-bit, x86 for 32-bit).
(C: -> Program Files-> MATLABR2014b-> bin-> win64) for x64
(C: -> Program Files-> MATLABR2014b-> bin-> win32) for x86
- There are also video download links, which will be helpful for users in the process of activating Matlab R2014b, and hence also in the simulation process.
- After that start Matlab R2014b, it's registered, enjoy!!! 🙂
Matlab r2014b Crack + Setup Free Download
|
OPCFW_CODE
|
Anil posits that [Flickr's] users, creators of value and "interestingness", are getting shortchanged, or at least that in the future our understanding of Flickr's value proposition will lead us to conclude its users are being shortchanged. It's part of an ongoing struggle to define our norms around participation, community, hosted tools, and ownership. (On a side note, syndication can mix into this explosively, as with this thread last Summer on Meetup and EVDB)
Actually Anil’s point was more interesting and more subtle, and worth reading, but as the signal bounced around the echo chamber, it degraded into “Hey, I make Flickr interesting, pay me!”.
I mean as software tends towards commodification (as t approaches 0), clearly Flickr derives its value from its participants, yes?
No. Quite the opposite.
I could replicate Flickr’s software (call it Flickah, a Boston Flickr derivative), give it away free, and still people would pay to be part of Flickr. And in fact if I ever managed to grow the community to a fraction of Flickr’s size I’d be in trouble. Flickr isn’t a photo hosting site, it’s a salon, and unsurprisingly value accumulates most quickly to the salon owner. Value arises from the centralization.
Community Service Models?
So assuming software, what alternative models exist for a community to host a service it finds useful? How do communities gain and support the value of centralization without handing over control? A [Flickr], an Upcoming, or an Audioscrobbler provides value in direct proportion to the size of the community, while the centralization of a Google Maps (or a Geocoder) makes an expensive resource affordable. It's a question I've been wrestling with for a while (community+service). And a question I asked at techdinner recently, to surprising results.
I expected to hear about grid computing, alternate economic models, p2p, etc. Instead it was suggested that maintaining such a resource, or at least some subset of such community resources is the role of the Academy in the 21st century. (less surprising given the presence of Berkman-ites in the crowd)
Perhaps not a Google Maps, or Flickr but maybe Harvard should be hosting the definitive URI for books? I was intrigued. (not to mention a little appalled given my stint doing tech for Higher Ed.)
Last thought: in the multitude of responses to Anil, it was pointed out that interestingness can be gamed, as can most deployed reputation systems. Yet eBay works? How? By making buy-in to the system cost real cash, something Flickr print is poised to do. As a print service it's not terribly exciting, but what a great way to quantify interestingness.
|
OPCFW_CODE
|
How can scammer actually reply from a spoofed email address?
I (mostly) understand how a scammer can send an email from a spoofed account, all you need is an unsecured SMTP server.
But how is it possible, for a scammer to RESPOND and maintain an email conversation with the victim from the spoofed address? In this case, there was no "reply-to" and the domain is completely legitimate.
The only clue was that the mail address of the responder (scammer) was in some (not all) cases suffixed with a "1", i.e<EMAIL_ADDRESS>and<EMAIL_ADDRESS>My first thought was that the mail server at "legitdomain.com" was compromised, in which case pulling this off should be fairly simple since you can receive and respond to emails and create rules to redirect emails from target addresses so that the domain owner staff don't see them. You can also read incoming/outgoing emails to help with target selection, i.e. target a recently invoiced client that is about to make a payment and convince them that the banking details changed.
But is there a way to do this without having access to the mail server?
What makes you sure that the email is spoofed instead of the attacker controlling the used account? There is no need for this to control the mail server, it is sufficient to control the specific account.
I think you are using the term "spoofed" incorrectly. "Spoofing" is a specific technical term. If you can determine what exactly happened with the account, you might find that the answer is obvious<EMAIL_ADDRESS>is not necessarily "spoofed".
Spoofing may not be the right term. If you mean they send by setting another email as the "from" address, you don't have to be a hacker for this. But this is too primitive and will be filtered/rejected/flagged by the receiving server.
However, it's possible they gained access to DNS or the mail server, or they leverage badly set up DNS records, an internationalized domain name (using international characters that look like English ones), a lookalike domain (e.g. the number 1 instead of the letter l), or server IPs that were once allowed in SPF but have since been taken over by somebody else. These are not really spoofing though.
There are two ways that this might be "spoofed":
The first way is that the adversary actually has control of the mailserver or a compromised account. It's not uncommon for adversaries to try to hijack a legit email address as a means to send phishing attacks. During my time as a SOC analyst (I worked for a large medical university) we would often get phishing emails coming from real .edu addresses. The adversary would initially compromise some random account from another university, and then send phishing messages from those accounts as a means to add credibility to their asks. Part of my job was to call these other schools and tell them that they had a compromised account.
The second option could be that they registered a domain that looks just like the real one. A trivial example would be something like m1crosoft.com. Some of these are so close and well crafted that it takes a trained eye to catch them. It's even harder if it's not a well-known brand in which case you may not know what the actual URL is--so you wouldn't know that "mycompnay.com" is actually a 'spoofed' version of my-comapny.com
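As a toy illustration of that second option, a few digit-for-letter swaps can be normalized away to flag simple lookalikes. The `CANON` table and helper names below are mine; a real checker would use Unicode confusable-character tables (UTS #39) and also catch transpositions:

```python
# Canonicalize characters commonly swapped in lookalike domains.
# Deliberately minimal: real detection also handles Unicode
# homoglyphs and transpositions like "comapny" for "company".
CANON = {"1": "i", "l": "i", "0": "o", "5": "s", "3": "e"}

def normalize(domain: str) -> str:
    d = domain.lower().replace("rn", "m")  # 'rn' renders like 'm' in many fonts
    return "".join(CANON.get(c, c) for c in d)

def looks_like(candidate: str, legit: str) -> bool:
    """True if candidate is not the legit domain but normalizes to it."""
    return candidate != legit and normalize(candidate) == normalize(legit)

print(looks_like("m1crosoft.com", "microsoft.com"))  # prints True
```

This catches `m1crosoft.com` against `microsoft.com`, but it would miss a transposed domain, which needs an edit-distance check on top.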
Thanks Tobin, given your response as well as the comments to the main post, it seems there is no glaringly obvious way for a scammer to "spoof" or "impersonate" an email address for both sending AND receiving of emails (without using reply-to). This leaves us with either a compromised server, compromised email account, or compromised individual. It's also not a case of a look-a-like domain. My reason for coming to this forum was to learn if there are other more advanced methods that I'm unaware of, manipulating MX records or something of that nature.
It takes a knowledgeable inspection of the email headers to determine where an email actually originated. Anyone can easily send an email that reads as "From<EMAIL_ADDRESS>and directs you to reply to some other address.
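That inspection is easy to script with Python's standard library. The raw message below is made up for illustration; real analysis walks the Received chain bottom-up to find where the mail actually entered the system:

```python
from email import message_from_string

# Made-up raw message for illustration only.
raw = """\
Received: from mx.legitdomain.example (198.51.100.2)
Received: from mail.attacker.example (203.0.113.7)
From: Accounts <accounts@legitdomain.example>
Reply-To: accounts@lookalike.example
Subject: Updated banking details

Please wire payment to the new account.
"""

msg = message_from_string(raw)
print("From:    ", msg["From"])               # trivially forgeable display value
print("Reply-To:", msg["Reply-To"])           # where replies actually go
print("Received:", msg.get_all("Received"))   # the relay chain, newest first
```

The visible From: line proves nothing; the Reply-To header and the bottom-most Received entry are where a forged sender and the true origin tend to show themselves.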
|
STACK_EXCHANGE
|
Odio further states
If you send us a 2048px image we don’t re-compress it (preserving the original quality).
An example of our commitment to quality is in the choice of our default resolution size. A 720px JPG image can be rotated without re-compressing it (while a 700px image can not).
Improving photo quality through better image compression and resampling is a process that will never end. We have made progress on this front (reducing compression levels, switching to lanczos, increasing resolution 2048 px, etc) but we’re not stopping there. We will continue to focus on photo quality and we welcome your constructive suggestions.
We’re proud of how far we have come so far. No other photo sharing company provides Facebook’s photo quality, at our scale, free of charge.
The Facebook photo infrastructure is incredibly complex. We resample images in different ways depending on where the photo is uploaded (and its size). However we don't use PHP's GD image resampling library. Server-side we currently use GraphicsMagick (a fork of ImageMagick) with the Lanczos resampling algorithm. We use a custom-built resampling library client-side.
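For intuition on the Lanczos choice: it weights nearby source pixels with a windowed sinc. Here is a minimal 1-D sketch, illustrative only and not Facebook's implementation:

```python
import math

def lanczos_kernel(x, a=3):
    """Lanczos windowed sinc: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def resample_1d(samples, new_len, a=3):
    """Resample one image row to new_len samples with a Lanczos filter."""
    scale = len(samples) / new_len
    out = []
    for i in range(new_len):
        center = (i + 0.5) * scale - 0.5  # position in source coordinates
        lo = math.floor(center) - a + 1
        acc = norm = 0.0
        for j in range(lo, lo + 2 * a):
            if 0 <= j < len(samples):
                w = lanczos_kernel(center - j, a)
                acc += w * samples[j]
                norm += w
        # Normalize so flat regions keep their value near the edges
        out.append(acc / norm if norm else 0.0)
    return out

row = [0, 0, 0, 255, 255, 255, 0, 0]
print(resample_1d(row, 4))
```

Unlike box or bilinear filters, the negative lobes of the windowed sinc preserve edge contrast when downsizing, which is why it is a common choice for photo pipelines.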
If you’d prefer to be in control of the resampling process then you can always do it yourself. As noted above, we work to not resample images that are the correct resolution. Just send us a 2048px (or 720px) image. .
But my testing bears out a different story. It indicates all images are recompressed. I uploaded these three pictures at 700, 720, and 2048 pixels. The original sizes were 83, 86, and 1,005 KB. The resulting images on FB were 90, 93, and 679 KB. I chose 'high resolution' on upload.
An email requesting clarifications on this discrepancy was not returned.
In the coming weeks, Facebook will also roll out a new Flash uploader which will resize the image to 2048 pixels on the client side before upload. Will this new uploader strip EXIF and IPTC data as well?
Note that Facebook does not actually give you the option of displaying the images in high resolution. Regardless of the upload size, all images are displayed at 720 px. For images higher than 720 pixels, there is an additional link to Download in High Resolution. This can be problematic for copyright owners as it encourages the copying of images off the site. Note that the mere copying of images from Facebook onto a local hard drive is perfectly legal. It's the redistribution of those downloaded images that can cause problems for anyone who decides to then upload them. If the point of allowing high resolution images is to permit higher view quality, why not just offer an option to display it in high resolution? Why encourage the downloading of copyrighted images?
|
OPCFW_CODE
|
Novel–Dual Cultivation–Dual Cultivation
Chapter 894 – Sacred Lands
The Mating Of The Moons
“Must we use the Soul Validity Scroll again?”
Su Yang closed his eyes to think for a minute.
‘Although it’ll be troublesome trying to get inside, this sect built by my family is definitely one of the safest places in the Four Divine Heavens right now. If I let them take care of Su Liqing and the others, I will be able to traverse the Four Divine Heavens with relief.’ Su Yang thought to himself, planning to leave those in the Spatial Device at the sect built by his family.
“The Sect Master of the Lonely Fairies’ Refined Palace, Luo Ziyi,” he calmly said.
“Luo Ziyi… Where is this sect located?” Su Yang then asked.
“Well… When Su Yang was still an ignorant young man, he’d almost been tricked by Li Menghua. Because of that, he’d spent at least 100 years studying people just to make sure he never makes the same mistake again.” Su Yang revealed another one of his secrets that he’d sworn he’d never tell anyone.
“Make sure you repeat every word without missing any, especially the three months part. That’s the most important.” Su Yang then said.
“I can tell whether one’s lying or not even without the Soul Validity Scroll.”
Mu Yuechan’s jaw dropped to the floor after hearing this.
“You want me to do something for you? I give you an inch and you want to take a mile, huh? In case you forgot, I am only telling you about the Su Family because of the information you gave me. It’s not like we’re friends or anything. If you want me to do something for you, you’ll have to pay up.” Mu Yuechan said to him.
“There’s no need,” she shook her head.
“Their sect is also called the Lonely Fairies’ Refined Palace, and their Sect Master is Luo Ziyi.”
“Really, this is very troublesome…” Su Yang said.
“Because they are located in a place where men cannot set foot, or they’ll be killed immediately without any chance to explain themselves.”
Su Yang turned to look at her with a serious smile on his face.
Hearing this, Su Yang raised his eyebrows and said, “You’re telling me they’re located in the Sacred Lands?”
“If you help me, I will tell you about the time Su Yang almost mistook a…” Su Yang stopped his sentence midway and smiled.
“Don’t tell me you’re planning to go there? I wouldn’t do that if I were you.”
“Wait! Sit back down! I never refused to help you!” Mu Yuechan said with a frown on her face.
“What? You have even more secrets about Su Yang?” Mu Yuechan stared at him with wide eyes, silently wondering to herself where in heaven’s name he obtained such information.
Mu Yuechan nodded and said, “There’s not much to say about them. The sect was built about 1,000 years ago, and they have been growing steadily ever since. In fact, they’re already strong enough to rival even some of the top sects in the Four Divine Heavens. The only thing hindering their growth is that they only accept female disciples.”
“I heard it loud and clear,” she said.
|
OPCFW_CODE
|
You must not call setTag() on a view Glide is targeting
Hello,
I'm using the latest version (4.0.14) of fast-image with React-Native 0.53.0.
Today, I was doing some testing on several android devices (Samsung S7 Edge, Nexus 6P, etc) and I'm getting this exception logged in our sentry instance.
You can see the entire stack trace here: https://sentry.keenvil.com/share/issue/a1a259ad75a44809a33943c503bf4229/
Did something change in the android implementation that I should be aware of and are there any code changes I need to make?
Searching through Glide issues I found these ones: 1531 and 370 but they both are for Glide 3.7.0 and Fast-Image is on 4.7.1 if I'm not wrong.
Any idea how this can be fixed?
Thanks!
@fmonsalvo The Glide 4 code hasn't been published to npm yet. It will be in the next major update.
Not sure what could be causing that error. Apparently though you must not call "setTag" on a view glide is targeting, are you possibly doing that somehow? Might be relevant: https://github.com/bumptech/glide/issues/1531
Same here.
I used it in a FlatList, and when I swiped the list back and forth quickly, I got this exception.
@fmonsalvo could u share the solution? thanks.
Hey @DylanVann this is still happening on latest version (5.0.11). Any chance this will get fixed soon?
Thanks.
Guys, on my app, whenever I press the back button on Android to close the app or try to log out, this error appears on the android studio logcat. I went and looked up some of the files reported, and the issue for my case seems to be from the FastImageViewManager.java. Below is a snippet of the function reported to be causing the error.
The "requestManager.clear(view)" seems to be causing the issue for my app. I tried the brute force approach to see if this was really affecting my app, basically removing the requestManager.clear(view), and the app no longer crashes whenever I press the back button on the home screen or log out.
I don't completely understand the entire library myself, or Glide, since I've only looked at small parts here and there, but this is what I speculate: this function seems to be getting rid of the images and deallocating memory while Glide is still holding onto that view and its images, trying to keep the memory there. With the deallocation, Glide tries to reallocate the memory it lost, so the two actions conflict and cause the app to crash.
Not sure if my speculation is near the ballpark or not at all. Can I get someone to tell me the logic behind glide getting rid of the cached images?
Thanks.
@PangGua00 you are right! Removing testID on the FastImage component worked for me as well.
But remember to also add the proguard-rules for android.
This issue can be closed imo. Solution was: upgrading to the latest version (v6), adding proguard rules for android & removing testID for any FastImage component 🎉
I had to upgrade everything to react native >0.60 and got everything working for iOS, but haven't figured out android yet. I am getting an error: error while updating property 'source' of a view managed by 'FastImageView'... null ... You must not call setTag() on a view Glide is targeting
I tried to add the proguard-rules from the README. I wasn't sure what testID people were talking about, but tried commenting out/removing the following (from the picture below).
Any other suggestions would be great.
"react-native": "0.60.5",
"react-native-fast-image": "^7.0.2"
We were receiving this crash on Android release builds because we were passing testID in FastImage component.
Does FastImage not support the testID prop? If not then how do we support jest testing using the FastImage component?
@beisert1 are you passing testID anywhere else. Still haven't been able to get around this and have been just disabling for android and only using in IOS.
I had this issue with using react-native-elements Avatar component. Seems like it adds the testID under the hood.
Currently just using FastImage for Avatars on iOS:
<Avatar
  ImageComponent={Platform.OS === 'ios' ? FastImage : undefined}
  source={{
    uri: picture
  }}
  size={size}
  {...rest}
/>
@okarlsson you can create a custom Image component like this which will work for android and iOS and then pass this component in ImageComponent prop of Avatar.
import React from 'react';
import FastImage from 'react-native-fast-image';

// Strip testID so it never reaches the native FastImageView;
// forwarding it is what triggers the setTag() crash on Android.
export const Image = React.forwardRef((props, ref) => {
  const { testID, ...otherProps } = props;
  return <FastImage ref={ref} {...otherProps} />;
});

Image.propTypes = FastImage.propTypes;
Image.priority = FastImage.priority;
Image.resizeMode = FastImage.resizeMode;
Image.cacheControl = FastImage.cacheControl;
Image.preload = FastImage.preload;

export default Image;
|
GITHUB_ARCHIVE
|
Some big news just dropped in my inbox, as Office Live has been renamed, to Office Live Small Business, and a new offering, Office Live Workspaces has just been announced.
First of all, nothing about what was up until now Office Live has changed except the name. Why the name change? According to the email:
Microsoft today laid out the next phase in its strategy for online services, offering a roadmap for new offerings from the company’s Business Division that synthesize client, server and services software.
These hybrid offerings will combine elements of client-based programs with software that runs on large servers and new services delivered over the Web. Microsoft plans to deliver these services over the coming months under two key service offerings: Microsoft Live and Microsoft Online. These offerings will span from hosted services, by Microsoft or by a Microsoft partner, to on-premise offerings, delivering software business customers want, however they want it.
Oh boy, another “Live” name from Microsoft to clear the air. But this actually makes some sense, as it pulls together the large- and medium-business offerings with these new small-office products, as well as products such as Small Business Accounting. To recap, Microsoft Online currently offers Microsoft Dynamics CRM Live, as well as:
· Microsoft Exchange Online
· Microsoft Office SharePoint Online
· Microsoft Office Communications Online
· Microsoft Office Live Meeting
· Microsoft Exchange Hosted Services
· Microsoft BizTalk Services
So what is Office Live Workspace? The beta, which is accepting applications now at www.officelive.com, will begin soon, and is expected to be a free service, although perhaps with some advertising.
Access documents when away from your desk
- Store documents and access them from any computer
- Stay productive while at home, an Internet café, library, airport, etc.
Share documents with others
- Gather feedback on a document, report, or presentation
- Share with people who can’t access your corporate network
Prepare for a meeting
- Share the agenda, minutes, and action items
- Post meeting handouts or presentations
Organize a study group
- Work together on assignments and share notes from class
- Keep a shared schedule and task list for your group
Keep track of important school information
- Manage schedules from sports to registration deadlines
- Track your GPA and progress toward degree requirements
Coordinate with club or team members
- Post and manage schedules (for sports, clubs, etc.)
- Share lists of who’s bringing what (no more e-mail back and forth)
Organize an event
- Use for a party, camping trip, even a wedding
- Share to-do lists, timelines, budgets, directions
Store your information and keep track of favorite things
- Store and access important passwords, frequent flyer numbers, etc.
- Create Top 10 lists of favorite films, restaurants, books, etc.—and keep them private or share with friends and family
Prepare for a trip
- Plan for the trip with travel budget and packing list templates
- Share your itinerary, contact info, and important documents with colleagues or family
While there seems to be some overlap between this product and the more home-user/consumer-oriented offerings such as SkyDrive and Windows Live Events, we don’t know enough about these offerings to make comparisons or say where they overlap.
From the FAQ, some more interesting tidbits:
- It’s free. No purchase or credit card information is required. Microsoft Office Live Workspace may eventually include advertising, but we’re still testing different designs.
- Anyone who uses Microsoft Office can benefit from this service.
- Your files have the benefit of virus protection from Microsoft Forefront Client Security technology, and they can only be accessed with a Windows Live ID and password. You control who can view, comment on, and edit your documents. You manage permissions and decide whether someone has access to a single document or an entire workspace.
- For a better experience with Word, Excel and PowerPoint, we will make an Office Live Add-in available for the Microsoft Office suite. If you want to do real-time screen sharing, you can download the Microsoft SharedView beta here.
- You can upload many file types, from Microsoft Office documents to pictures and PDFs. In fact, you can store over 1,000 Office documents in your workspace, based on the average file size and use of Word, Excel, and PowerPoint by students, work, and home users. For your protection, we don’t allow the uploading of files that could cause security issues such as .exe files.
- Currently, it’s available in English only and optimized for use in the United States. We plan to include additional languages in the future.
All in all, some very interesting news from Office Live, which has suddenly thrust itself back into the forefront of the Software + Services space.