Hey there! Do you want to use NTFS drives on a Mac?
Can't you write data from your Mac to an external hard drive?
It is annoying, isn't it?
But you are not alone; this happens to almost every Mac user. It happens because Macs are designed this way.
I have great solutions for this issue, but before jumping to the point, let me tell you why it happens.
Do you know why?
According to NetMarketShare, 88.14% of people use Microsoft Windows and only 9.42% use macOS. Therefore, almost all hard drive and USB drive manufacturers pre-format their products with NTFS.
And the surprising point is that NTFS is designed for Microsoft Windows, and macOS only partially supports it.
What I mean is that Macs can only read NTFS drives: you can see and import the files, but you cannot write or copy data back to an NTFS pre-formatted external drive.
Now, let's jump to the solutions.
For convenience, I have included five different methods to use NTFS drives on a Mac in this article.
1. Format the External Drive to FAT32
This method is not very convenient, at least not for me, because FAT32 has a per-file limit: you can only transfer files up to 4 gigabytes in size.
However, the good thing is that once you format your drive to FAT32, it can be read and written on both Windows PCs and Macs.
How to Format an External Drive to FAT32 on a Mac?
This is easy. Plug your external drive into your Mac, then follow the checklist below.
- Plug your external drive or USB thumb drive into your Mac
- Open Spotlight and search for “Disk Utility”
- In Disk Utility, choose your drive and erase it with the format “MS-DOS (FAT)”
Caution: When you perform this procedure, all the data on the external drive will be permanently erased. Therefore, make sure to back up your data from the external drive to another drive first.
After the drive is erased, you will be able to write data from your Mac, limited to 4 gigabytes per file. If you are comfortable with the command line, the same erase can also be done from Terminal, as sketched below.
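A minimal sketch using the built-in diskutil tool, assuming your drive shows up as /dev/disk2 in the output of the first command (the disk identifier and the UNTITLED label are placeholders; FAT32 volume labels must be uppercase):
diskutil list
diskutil eraseDisk FAT32 UNTITLED MBRFormat /dev/disk2
Double-check the disk identifier before running this, because eraseDisk wipes the entire disk.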
However, if this method is not convenient for you, let's jump to the second method, which will definitely work for you and is pretty handy.
Don't miss reading: 2 Ways to Fix “Application is damaged, and can't be used to install macOS.”
2. Microsoft NTFS for Mac by Paragon
This is a very handy app, and I have been using it regularly for the past few months. It allows you to copy and paste data from your Mac to an external drive.
The good thing about this app is that once you install it, you can start writing data to your external drive right away; it does not require any special setup.
It is worth mentioning that the app is not limited to copy and paste: you can also delete and edit files on your external drive from your Mac. Besides, it is compatible with all versions of macOS, including macOS Catalina 10.15.
How to Download and Set Up Microsoft NTFS for Mac by Paragon?
- Download Microsoft NTFS for Mac by Paragon
- Install it on your mac
- After the installation completes, restart your Mac
- You're good to go
Alternatives for Microsoft NTFS for Mac by Paragon:
- Besides Microsoft NTFS for Mac by Paragon, there are many other alternatives that do the same job. One of them is Microsoft NTFS for Mac by Tuxera.
This app is quick and efficient, and allows you to copy, move, paste, or delete files using your Mac.
- The next alternative is Mounty for NTFS. It also gives you access to NTFS drives on a Mac. All the features included in Microsoft NTFS for Mac by Paragon and Microsoft NTFS for Mac by Tuxera are available in Mounty for NTFS.
3. Enable NTFS Write Support in Mac Terminal
This method may look difficult and confusing, but it is quick and easy.
It allows you to enable write support for a specific external drive using the built-in Mac Terminal.
What makes this method distinct is that you don't need any additional software like Microsoft NTFS for Mac by Paragon; instead, it is done in the Mac Terminal without any extra cost.
Caution: Make sure to backup your data before doing any single procedure.
Essential to Read: Create macOS Catalina Bootable USB on Windows PC? [+Video]
Steps to Enable NTFS Write Support in Mac Terminal
- Open Spotlight and search for “Terminal”
- Connect your external NTFS drive or USB stick
- Enter the following command in the Terminal window
sudo nano /etc/fstab
This command opens the /etc/fstab file in the nano editor (the file may be empty). Scroll to the end with the arrow keys, then add the second line below.
Make sure to replace “My Passport” in the second line with your own external drive's name. Note that spaces in the drive name must be escaped as \040 in fstab.
LABEL=My\040Passport none ntfs rw,auto,nobrowse
- Now, press Ctrl+O to save, then press Ctrl+X to exit
- Next, eject and reconnect the drive so the new fstab entry takes effect, then open a Finder window, choose “Go to Folder”, type /Volumes/Your Drive Name, and press Enter.
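You can also open the volume directly from Terminal. A small sketch, assuming the drive is still labeled “My Passport”:
open "/Volumes/My Passport"
The nobrowse option hides the volume from the Finder sidebar, which is why you have to reach it by path.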
Now you have full access to copy, move, and share any data between your Mac and the external drive.
4. Use Parallels Desktop
This app is mainly for virtualization, but you can use it to your advantage: it allows you to install a Windows operating system on top of macOS.
To be more specific, when you install the Parallels Desktop app, it lets you install a second operating system, e.g. Windows 10, next to macOS, although it does not function exactly the way a normal Windows 10 does on a dedicated physical PC.
I have a thorough article about how to use Windows 10 on Parallels Desktop on a Mac, but I have also included quick steps below to install Windows 10 on Parallels Desktop and use it to share data between your NTFS drive and the virtual machine.
How to Use Parallels Desktop for Data Exchange?
- Download Parallels Desktop latest version
- Install Parallels Desktop on your mac
- Download the Microsoft Windows 10 ISO file
- Install Windows 10 on Parallels Desktop on your mac
- Launch Windows 10 on Parallels Desktop
- Now connect your external NTFS drive to your mac
- Open the NTFS Drive on Windows 10 on Parallels Desktop
- Finally, Exchange data between Windows 10 on Parallels Desktop and your external NTFS drive.
5. Use Boot Camp Assistant
Boot Camp Assistant is a built-in Mac utility, introduced with Mac OS X Leopard 10.5 in 2007. It allows the user to dual-boot macOS and Windows; in other words, to install Microsoft Windows on a Mac.
Steps to Set Up Windows 10 with Boot Camp:
- Download the Windows 10 ISO image from Microsoft's official webpage
- Open Spotlight and search for Boot Camp Assistant, then open it
- Choose the Windows 10 ISO, then wait until it copies the Windows files.
- Choose how much space on the Mac's hard drive you want to dedicate to Windows
- Proceed forward and let the Windows drivers be downloaded.
- Finally, set up Windows as you would on a normal PC
Once Windows is completely installed, plug in your NTFS drive and exchange data just as on a normal Windows PC.
To switch between macOS and Windows, restart your Mac and hold the Option (Alt) key until the choice of both systems appears, then choose Windows to start up.
Caution: Make sure to back up your data before performing any of these procedures.
NTFS drives are designed for Microsoft Windows. If you use one on a Mac, you will only be able to see and open the data; you cannot write, copy, or paste data from the Mac to the NTFS drive.
The solution for using NTFS drives on a Mac is to either convert your NTFS drive to FAT32, use a third-party app, install Microsoft Windows on your Mac, or use any of the other methods mentioned above.
Be confident and try whichever method you wish.
Leave a comment in the comment box below and let us know which method you used.
|
OPCFW_CODE
|
Awesomefiction – Chapter 203 – Nickname: Monster – read-p1
Novel: Complete Martial Arts Attributes
Chapter 203 – Nickname: Monster
“Of course.” Wang Teng smiled gently and patted the little crow's head. He passed it over.
The two of them joked and laughed as they walked to their classroom. Wang Teng was surrounded immediately. Everyone came over to greet him, whether or not they had spoken with him before.
Wang Teng laughed and said, “Dad, don't judge it by its ordinary appearance. Its beak and claws are powerful weapons. Even a martial warrior who gets pecked or scratched by it can get hurt. Its feathers are tough and durable too. Normal weapons won't be able to injure it.”
Who dared to look down on him now?
The entire arena, seen from afar, was filled with people. Besides humans, there was nothing else.
At the same time, Wang Teng finally proved his status as a 'monster.'
“Star beast!” Both of them were astonished immediately.
In the blink of an eye, it was almost 7 pm.
He brought the little crow back to his house.
“I don't have a choice. My master gave me some homework. He wants me to battle one of you every day,” Wang Teng said shyly. At the same time, he complained about himself in his heart: Why are you pretending to be young? Revolting!
Many people were eyeing the 100th position and wanted to pull him down.
The little crow seemed to understand Wang Teng's intention. It looked at him unhappily and landed in front of Doudou. It cawed at her a few times.
Quite a few students from the third and fourth years took notice of him.
Wang Teng had done some tests before. The feathers were extremely hard and sharp.
“Brother Teng, can I be your girlfriend? I have a cute and lovely voice!”
“Hmph, don't you know what happened?” Hou Pingliang and his friends looked at him in disdain.
“That's right. Of all the things you could raise, why did you choose a crow?” Wang Shengguo couldn't understand either.
But that was fine too. With him around, Wang Teng didn't have to work hard. He could pass the entire operation to Fatty. All he had to do was wait and collect the money.
“Don't worry. I know what to do,” Wang Teng said with a smile.
It felt as if a storm was brewing.
In the morning, Wang Teng went to the First Department Dormitory and challenged the no. 99 student, Fang Ming.
But which one should he choose?
“In that case, you must raise it well,” Wang Shengguo and Li Xiumei immediately threw away their discrimination against the little crow and said in a serious tone.
Second, the Wang Teng on the ranking was the freshman!
But he didn't have a girlfriend!
First, there was no problem with the ranking!
Since he was going to the Xingwu Continent this time, he could get some star beast meat and keep it for later. That way, he wouldn't have to worry about not having food to feed the little crow in the future.
Wang Teng was utterly shocked!
He missed it very much!
“What else can happen? My school credits will get seized, and I will get punished,” Fatty Zhuge pouted and replied.
|
OPCFW_CODE
|
I saw my friend Zack Shapiro talking on Twitter one day about wanting to teach a Ruby on Rails class for beginners. Naturally, having a platform like TNW Academy, I knew that we needed to chat. I’ve known Zack for three years, and he’s one of those guys who just gets things done. From working with TechStars to moving to San Francisco to intern with Path and then landing a job at TaskRabbit, Zack has constantly impressed me with not only his determination, but also his hunger for helping others to help themselves.
But a lot of questions have come up about his Rails Zero to Hero class, and I thought that the best way to answer those was to have Zack do so himself. With that in mind, we took a few minutes to email back and forth and cover the bases, so here you go:
TNW: Why should people listen to you?
ZS: I was in their shoes about two years ago and I was desperate to learn how to code. I wasn’t patient enough to give this an honest effort until about August 2011. I took an internship at TaskRabbit knowing the basics of Rails and put myself in a position where I had to learn it. This class is a great, micro example of that: you’re buying in and surrounding yourself with other people who want to learn too. You’re not alone, you have me to help you and your peers and that’s really powerful.
TNW: Can you really learn to code anything worthwhile in 3 hours?
ZS: Yes you can. You can get a good handle on the fundamentals and by playing around with those fundamentals, you can build your first basic app. I’ve also created a one-pager so that you can quickly and easily refer to different data types and structures with brief definitions of what each is. Most of early web development is trying things that aren’t guaranteed to work and tweaking them until they do work. My goal is to help you avoid a lot of hassle by providing you with great, easy-to-understand fundamentals.
TNW: Why are you teaching a class?
ZS: I think there’s a big hole in how beginners learn how to code right now. There’s the “read the manual” approach where veterans across the Internet point beginners at blog posts, StackOverflow questions, and other documents that they’re not ready to understand yet. As a result, they struggle and skim and eventually give up because they don’t have the ability to have someone hold their hand and kindly teach them when they need it, even for the most novice questions. The other side of things involves you going to a web dev school and paying sometimes over $10,000 to learn to code in 3 months. Those are nice if you can afford them, and they often help you get placed in jobs that really like teaching new engineers.
There’s a big middle ground there and that’s why I want to teach people. My particular learning style is a combination of throwing myself in the deep end of something, and having someone to kindly explain something to me when I don’t understand, even if it’s a basic concept. That’s important. That’s why I want to teach you Rails.
TNW: How is this going to be any different from Codecademy or web development schools elsewhere?
ZS: Codecademy I think is great for learning syntax but the second you want to build an app on your own, you’re completely unequipped because you’re only coding in a browser window. I want to provide you resources to set up your dev environment so you can code on your machine whenever you like. I want you to have access to the video from TNW Academy to refer to whenever you need it and help teach you not just syntax, but fundamentals of Rails.
TNW: Why are you directing the class only at Mac users?
ZS: I code on a Mac and I want to provide help if you need dev environment setup. I’ve never set up a Windows machine, so in order to be the most helpful, I’m letting people know this is a Mac-only class.
TNW: How long did it take you to go from “beginner” to “I can get a job doing this”?
ZS: In total, about 3-4 months. I found mentors to answer my early questions and to push me to try new things when I hesitated. Each feature I built and bug I fixed was built on learnings from previous things I’d built. Each time I had a new challenge I got faster and faster. And you will too. You’ll feel confident with things one at a time until you’re combining concepts to build the apps that have, until this point, just been in your head.
TNW: What’s your favorite thing, aside from Silencer, that you’ve built so far?
ZS: I love building scrapers. In college, I built a web app that paid my parking tickets for me with the click of a button. Funny thing was, after I built it, I never got another parking ticket.
So there you have it, from the guy himself. Ready to take the plunge? The class is almost full, so get signed up while you still can.
|
OPCFW_CODE
|
View Full Version : Import excel cells within Revit schedule
2005-06-11, 06:49 PM
There are many examples of code we can get via the code samples and .net folders on the Autodesk web site. It seems quite amazing... I don't have the smallest idea how one could implement what I have in mind (very easy in my mind, maybe not in reality): importing Excel data into Revit schedules. Here (in France) we always deal with Excel to communicate with the client (public/private). I mean, when we receive a huge Excel sheet with all the areas inside, I can tell that we dream about a kind of "Import Excel" external tool..
Any code or info about this?
The example that exports data to excel shows you the basis for going the other way. I'd google for general .net excel examples as well.
2005-06-18, 01:17 PM
I can't understand how to make this operation... Inserting a row within a schedule!!!
I mean the export sample seems "easier" to build than an import command. I have the feeling that the API allows you to retrieve element properties, but it's not so easy to create elements. There are some indications about BuiltInParameters in the RevitApi help, but the list is far from complete, and as a very anonymous user I can't subscribe to ADN to be informed...
2005-06-18, 03:30 PM
...I can't understand how to make this operation... Inserting a row within a schedule!!!...I believe FK answered this in another thread?
Note that as of this release there is no interactive going back-and-forth between Revit and the API program.
Technically data isn't part of a schedule, it is part of objects and schedules display the data. The exception is calculated parameters that exist only in a schedule. Even those are not fields you act on, they act when other data is supplied.
So if you want to fill in a value in a schedule you really need to fill in the data from the object. Within the Revit interface a schedule allows you to enter data but it wouldn't surprise me at all if a schedule is inert from a programming perspective?
Adding a row to a schedule is a command you see on the options bar but that is tied to Rooms for example and it really is creating an object as well. A schedule key appears to be the only data row that you can add that doesn't specifically involve a new physical object in Revit.
2005-06-18, 07:34 PM
Thank you for the wise advice...
And sorry for repeating myself. Maybe those things are a little bit too complex for me!
interactive going back-and-forth between Revit and the API program.
What fedor means here (I think) is that you can't access the Revit document, perform an operation, and then get an updated document from the API. You need to run the command again to get the new document.
Remember Autodesk have made it clear this version of the API is strongly orientated for structural analysis. Expect to see more general functionality exposed via the API in future versions (we hope).
2005-07-01, 12:19 AM
This thread relates to a similar thread in Revit Building General
As I understand it the Structural API can modify the objects already defined in the model either by replacing the objects or by updating the parameters (I am not sure which)
This means that the API must be able to control the objects or the object parameters.
As the schedule reflects the object parameters then adding a line would be similar to adding an object (depending on whether the line in the schedule is subject to filtering)
Changing the values in the schedule would be equivalent to updating the parameters of the object.
It would therefore appear that the API should be able to do this but I guess like everyone else we need some examples from Autodesk
If anyone can provide details on how to do this it would be greatly appreciated.
We implement "read" portions of the API first, and then slowly and warily tread into "write" because Revit models are complex interconnected beasts that you can't just go ahead and modify from the inside.
For instance, a schedule is not a sheet of numbers - it's a view of the model. So if you insist on putting some numbers in, you've got to change the model to reflect them, and that has to be meaningful.
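To make that concrete, here is a hypothetical sketch using the modern Revit API (not the 2005-era API discussed in this thread; LookupParameter, the Transaction pattern, and the "Comments" parameter name are assumptions for illustration). Since a schedule only displays element data, "importing an Excel cell" really means writing the cell value into a parameter on the corresponding element:
using Autodesk.Revit.DB;

public static void SetRoomComment(Document doc, ElementId roomId, string value)
{
    // All model changes must happen inside a transaction.
    using (Transaction tx = new Transaction(doc, "Import value from Excel"))
    {
        tx.Start();
        Element room = doc.GetElement(roomId);
        // Look the parameter up by its display name and write the cell value.
        Parameter p = room.LookupParameter("Comments");
        if (p != null && !p.IsReadOnly)
            p.Set(value);
        tx.Commit();
    }
}
The schedule then updates by itself, because it merely displays the element's data.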
|
OPCFW_CODE
|
package edu.thu.ebgp.controller;
import org.projectfloodlight.openflow.types.DatapathId;
import org.projectfloodlight.openflow.types.OFPort;
import net.floodlightcontroller.topology.NodePortTuple;
import edu.thu.ebgp.config.RemoteControllerLinkConfig;
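/**
 * Represents an inter-domain link between a port on a local switch and a
 * port on a switch managed by a remote controller, tracking its up/down state.
 */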
public class RemoteLink {
public enum LinkState {
UP,DOWN
}
private DatapathId localSwitchId;
private OFPort localPort;
private DatapathId remoteSwitchId;
private OFPort remotePort;
private LinkState state;
public RemoteLink(RemoteControllerLinkConfig config) {
this.localSwitchId = DatapathId.of(config.getLocalSwitchId());
this.localPort = OFPort.of(Integer.parseInt(config.getLocalSwitchPort()));
this.remoteSwitchId = DatapathId.of(config.getRemoteSwitchId());
this.remotePort = OFPort.of(Integer.parseInt(config.getRemoteSwitchPort()));
this.state = LinkState.DOWN;
}
public DatapathId getLocalSwitchId() {
return localSwitchId;
}
public NodePortTuple getLocalSwitchPort() {
return new NodePortTuple(this.localSwitchId, this.localPort);
}
public NodePortTuple getRemoteSwitchPort() {
return new NodePortTuple(this.remoteSwitchId, this.remotePort);
}
public OFPort getLocalPort() {
return localPort;
}
public DatapathId getRemoteSwitchId() {
return remoteSwitchId;
}
public OFPort getRemotePort() {
return remotePort;
}
public LinkState getState() {
return state;
}
public void setLocalSwitchId(DatapathId localSwitchId) {
this.localSwitchId = localSwitchId;
}
public void setLocalPort(OFPort localSwitchPort) {
this.localPort = localSwitchPort;
}
public void setRemoteSwitchId(DatapathId remoteSwitchId) {
this.remoteSwitchId = remoteSwitchId;
}
public void setRemoteSwitchPort(OFPort remoteSwitchPort) {
this.remotePort = remoteSwitchPort;
}
public void setState(LinkState state) {
this.state = state;
}
}
|
STACK_EDU
|
If an Asus netbook powers on but the screen stays blank, the first test is to attach an external monitor. If the external monitor shows a normal picture, the fault is in the laptop's own display path: the LCD panel, the backlight, or the ribbon cable between the motherboard and the screen. If the external monitor stays blank as well, the problem is more likely the graphics chip, the RAM, or the motherboard. Running the laptop permanently on an external monitor defeats the portability factor, so treat this as a diagnostic step, not a fix.
A power reset solves many cases. Disconnect the AC adapter, remove the battery, hold the power button for about thirty seconds, then reconnect the AC adapter and try again. If the laptop then boots, the power cord and AC adapter are working and the battery may be at fault. Also remember that a loose power connector, or an adapter unplugged at the wall, can look like a dead screen.
Loose hardware is another common cause. With the machine unplugged, reseat the RAM modules and check that the display ribbon cables are firmly connected. One owner opened the case, checked all the ribbon cables, found them connected fine, and cleaned out dust with compressed air; if the screen still only shows white with horizontal lines that fade over time, the panel itself is probably failing and needs to be replaced.
Firmware problems can also leave the screen blank. Resetting the BIOS by clearing CMOS returns settings to defaults, and if the machine is stuck in a boot loop after a BIOS update, changing the UEFI Secure Boot settings as outlined by the vendor may help. Some models can recover from a bad BIOS flash by booting from a newly created USB drive.
If Windows starts but the display stays dark, boot into Safe Mode and reinstall or roll back the graphics driver, open the onboard monitor settings and increase the brightness (a low brightness setting can be the whole reason for a black screen), or run Startup Repair from a recovery drive. Connect your external peripherals one by one to find out whether one of them is the culprit.
Before any of these procedures, back up your data, and if the netbook is still under warranty, let the service center handle the repair instead of opening the case yourself.
OPCFW_CODE
|
You can use heuristics or copied values, but genuinely the most effective strategy is experimentation with a robust test harness.
There is no “best” view. My suggestion is to try building models from different views of the data and see which results in better skill. Even consider producing an ensemble of models created from different views of the data together.
Probably, there is no one best set of attributes for your problem. There are many with varying skill/capability. Look for a set, or an ensemble of sets, that works best for your needs.
Python makes use of dynamic typing, and a mix of reference counting and a cycle-detecting garbage collector for memory management. It also features dynamic name resolution (late binding), which binds method and variable names during program execution.
What is a typical title and response for when somebody gives little information for the sole purpose of getting people to ask what happened?
Students are evaluated on a pass/fail basis for their performance on the required homework and final project (where applicable). Students who complete 80% of the homework and attend at least 85% of all lessons are eligible for a certificate of completion.
I am trying to classify some text data collected from online feedback and would like to know whether there is any way in which the constants in the various algorithms can be determined automatically.
Each of these feature selection algorithms uses some predefined number, like 3 in the case of PCA. So how do we come to know that my data set contains only 3, or any predefined number, of features? Does it not automatically pick the number of features on its own?
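A minimal sketch of one common answer, assuming scikit-learn: instead of hard-coding 3 components, let PCA keep however many components are needed to explain a chosen share of the variance.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(100, 20)  # stand-in for your real feature matrix

# A float in (0, 1) tells PCA to keep the smallest number of components
# whose cumulative explained variance reaches that fraction.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)

print(pca.n_components_)              # number of components actually kept
print(pca.explained_variance_ratio_)  # variance explained by each one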
Do you have any questions about feature selection or this post? Ask your questions in the comments and I will do my best to answer them.
The user must enter the new password twice (why?). Your program should display “Password change successful” if the new password:
I have a dataset that has both categorical and numerical features. Should I do feature selection before one-hot encoding of the categorical features or after?
Python's developers try to stay away from premature optimization, and reject patches to non-essential parts of CPython that would provide marginal increases in speed at the cost of clarity.[52] When speed is important, a Python programmer can move time-critical functions to extension modules written in languages such as C, or use PyPy, a just-in-time compiler.
unittest is Python's standard “heavyweight” unit testing framework. It's a bit more flexible
|
OPCFW_CODE
|
Add type attribute to search query record
Description
Add type attribute to search query record to differentiate between query and group.
This is needed when displaying queries and groups together on the QI dashboards as part of the Top N query groups feature.
Note:
type will now be part of the API response
type will be stored in local index as part of historical top N data
type will be displayed on the Top N dashboards and we will be able to filter the table using type
Added unit tests to cover this change.
Issues Resolved
Addresses: https://github.com/opensearch-project/query-insights-dashboards/issues/14
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Overall, it looks good. My only question is whether the "group_by" type will be displayed on the dashboard for grouped entries. Currently, we only have similarity, but if we add other grouping mechanisms in the future, the dashboard should clearly indicate the type of grouping. For example, it should distinguish whether the group is related by similarity or by some other criteria. In that case, we could use more descriptive labels like single_query, similarity_group, or other_group.
Another consideration is whether we’re only distinguishing between two options—individual queries and groups. If that’s the case, a boolean flag might be sufficient instead of using a string. Alternatively, if we need to handle groups differently on the dashboard, we could use an explicit flag, like:
attributes.put("isGroup", true);
Overall, it looks good. My only question is whether the "group_by" type will be displayed on the dashboard for grouped entries.
Yes, we will aim to display this on the dashboard.
Another consideration is whether we’re only distinguishing between two options—individual queries and groups.
IMO Would be good to explicitly mention group and query since this field will be pushed with the record to the local index and used to display on the dashboard directly.
I think we should make the labels here more descriptive so the dashboard can use them directly without any translation.
If we only have group & query options, the dashboard will need to append some info to indicate that group records are similarity groups. Also when another group type is added, group & query labels will not be sufficient.
I think we should make the labels here more descriptive so the dashboard can use them directly without any translation.
If we only have group & query options, the dashboard will need to append some info to indicate that group records are similarity groups. Also when another group type is added, group & query labels will not be sufficient.
This change is not to indicate the grouping type; the grouping type is already stored and can be retrieved from the cluster settings on the UI. This change is to denote whether the SearchQueryRecord entry (in the local index or on the dashboard) is a group entry or a query entry. This will be useful while querying and filtering.
@dzane17 Please take a look at these UX screens for more details : https://github.com/opensearch-project/query-insights-dashboards/issues/14
I'm fine with either displaying only "group" or the more detailed "group by ..." on the overview page. IMO the pro of adding the dimensions here is that it gives us extra information, so I think it could be a better way.
Let's check with the ux team on this as well. They should be able to give us suggestions from the end user's experience perspective.
Cluster settings will only tell us the current group_by setting. We don't know the value for historical data.
Cluster settings will only tell us the current group_by setting. We don't know the value for historical data.
Yes my bad, I thought we can get this from the measurements object where aggregation type will give us NONE or AVERAGE. However, this is not extensible if we add more grouping mechanisms in the future. Would be good to store the following:
type: group/query
group_by: similarity/none/tenant-id
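A hypothetical sketch of pushing both fields with each record (the field names and the grouped/groupingType variables are illustrative, not the final API):
attributes.put("type", grouped ? "group" : "query");
attributes.put("group_by", grouped ? groupingType.toString().toLowerCase() : "none");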
LGTM, merging the PR now!
|
GITHUB_ARCHIVE
|
Table of Contents
When it comes to understanding the meaning of acronyms, it can sometimes feel like deciphering a secret code. One such acronym that you may have come across is SBT. In this article, we will explore what SBT stands for in English and delve into its various contexts and applications. Whether you’ve encountered SBT in a professional setting, online, or in everyday conversations, this article aims to provide you with a comprehensive understanding of its meaning.
What is SBT?
SBT is an acronym that stands for “Scala Build Tool.” It is a popular build tool used primarily in the Scala programming language ecosystem. Scala is a general-purpose programming language that combines object-oriented and functional programming concepts. SBT, as the name suggests, is specifically designed to facilitate the building, compiling, and testing of Scala projects.
The Role of SBT in Scala Development
Scala projects can range from small scripts to large-scale applications. SBT simplifies the process of managing dependencies, compiling code, running tests, and packaging applications. It provides a declarative syntax that allows developers to define their project’s structure and dependencies in a concise and readable manner.
SBT uses a concept called “build.sbt” to define the project’s settings and dependencies. This file, written in Scala, serves as the configuration file for the build process. It allows developers to specify the project’s name, version, dependencies, and other build-related settings.
One of the key features of SBT is its ability to automatically download and manage project dependencies. It uses a centralized repository called “Maven Central” to fetch the required libraries and frameworks. This eliminates the need for developers to manually download and configure dependencies, saving time and effort.
Additionally, SBT provides a powerful incremental compilation mechanism. It only recompiles the necessary parts of the codebase, resulting in faster build times. This is particularly beneficial for large projects where recompiling the entire codebase can be time-consuming.
SBT in Practice: An Example
To better understand how SBT works in practice, let’s consider an example. Imagine you are working on a Scala project that requires the use of the popular library “Apache Spark.” Here’s how you would set up the project using SBT:
- Create a new directory for your project.
- Create a new file named “build.sbt” in the project directory.
- Open the “build.sbt” file and add the following lines:
name := "MySparkProject"
version := "1.0"
scalaVersion := "2.12.12"
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.1.2"
In this example, we define the project’s name as “MySparkProject” and its version as “1.0.” We also specify the Scala version to be used and add a dependency on the “spark-core” library from Apache Spark.
Once the “build.sbt” file is set up, you can use SBT commands to compile and run your project. For example, running the command “sbt compile” will compile your code, and “sbt run” will execute your application.
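For instance, a minimal session from the project directory might look like this (the first run is slower while SBT resolves dependencies from Maven Central):
sbt compile
sbt test
sbt run
You can also launch an interactive shell by running “sbt” with no arguments and then issue compile, test, and run from inside it, which avoids JVM startup time between commands.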
Commonly Asked Questions about SBT
1. Is SBT only used for Scala projects?
No, SBT is primarily used for Scala projects, but it can also be used for building projects written in other JVM-based languages such as Java and Kotlin.
2. What are the alternatives to SBT?
Some popular alternatives to SBT include Maven and Gradle. Maven is a widely used build tool in the Java ecosystem, while Gradle offers a flexible and powerful build system that supports multiple programming languages.
3. Can SBT be used in conjunction with other build tools?
Yes, SBT can be integrated with other build tools. For example, it is possible to use SBT as the build tool for a Scala project while leveraging Maven or Gradle for managing non-Scala dependencies.
4. Is SBT suitable for small projects?
Yes, SBT is suitable for projects of all sizes. While it offers advanced features for managing complex projects, it can also be used effectively for smaller projects with minimal configuration.
5. Is SBT difficult to learn?
While SBT has a learning curve, especially for developers new to Scala, it provides comprehensive documentation and a supportive community. With practice and familiarity, developers can become proficient in using SBT for their projects.
In conclusion, SBT stands for “Scala Build Tool” and is primarily used for building, compiling, and testing Scala projects. It simplifies the management of dependencies, provides a declarative syntax for project configuration, and offers features such as incremental compilation for faster build times. SBT is a valuable tool in the Scala ecosystem, enabling developers to streamline their development process and focus on writing high-quality code.
Whether you are a seasoned Scala developer or just starting your journey with the language, understanding SBT and its capabilities is essential for efficient project management and development. By leveraging SBT, you can harness the power of Scala and build robust and scalable applications with ease.
|
OPCFW_CODE
|
Cscope generation conflicts between class defs and reads
Hi,
I have starscope 1.5.6 (tried with 1.5.7, same result). I'm generating files from a Ruby code base.
When I run
starscope -e cscope
It will generate cscope.out with:
4 class
ExampleClass
<
ExampleClass2
The problem is that cscope doesn't seem to be able to find the definitions with this format:
cscope -f ~/git/cscope.out -dL1 ExampleClass # empty result bad
cscope -f ~/git/cscope.out -dL0 ExampleClass # result is ok
If I do the same operations with a cscope.out I generated in the past it works. The difference I see is the cscope.out would define the class as:
4 class
cExampleClass
<
ExampleClass2
When I check directly with the Starscope DB it works:
starscope -q lang:ruby,defs,ExampleClass
So the problem may be related to the generation of cscope.out
Thanks!
It would help me a lot to debug if you could share a snippet of the code that is being scanned or the starscope database that's being used to generate the export. Right now I don't have enough information to figure out where the bug might be.
Hi!
No problem.
I have the file test.rb
class ExampleClass
def self.hello
puts 'hello'
end
end
ExampleClass.hello
I generate the cscope.out:
starscope -e cscope
Result:
starscope query works correctly:
starscope -q lang:ruby,defs,ExampleClass
# No changes detected.
# ExampleClass -- test.rb:1 (class ExampleClass
Cscope find by symbol works correctly:
cscope -dL0 ExampleClass
# test.rb - 1 class ExampleClass
# test.rb - 7 ExampleClass.hello
Cscope find-by-definition for the class doesn't work
cscope -dL1 ExampleClass
# ( empty result)
Cscope find-by-definition for the method works
cscope -dL1 hello
# test.rb hello 2 def self.hello
In my previous use of Starscope I was able to find the Class definitions.
Thanks!
Hi,
I've been trying to debug the problem, what I found out:
The .starscope.db contains 2 entries for the class ExampleClass
":defs":[
{ ":line_no":1, ":type":":class", ":col":6, ":file":"test.rb", ":name":[":ExampleClass"] },
.....
":reads":[
{ ":line_no":1, ":col":6, ":file":"test.rb", ":name":[":ExampleClass"] },
.....
So we have one def and one reads both with the same line_no, col, and name.
Then in the exportable.rb, when we are trying to convert it to cscope, we will write only one of them here
The reads will replace the defs, because they have the same key and col.
If I remove this line so that the reads are not generated, the cscope.out is generated correctly. But I'm not sure what side effects that may have.
Thanks!
Sorry it took me a while to get back to this, and thanks for digging! Removing the line you proposed would unfortunately break cscope's find-by-symbol for most ruby constant usages.
I see two options, neither of which are ideal.
Put a condition around the yield :reads you identified so that it doesn't run if the constant being "read" is actually part of a definition. This is probably more correct, but I'm not sure how to do it reliably given the complications of the ruby AST.
Fix tokenize_line in exportable.rb to pick the "best" record for a given index instead of simply the last record for a given index. This might even be as simple as replacing the final toks[index] = with a ||= or we might want to do a more complete check.
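For reference, option 2 could be as small as a one-line sketch in tokenize_line (assuming the record variable is named rec; the real name may differ):
toks[index] ||= rec # keep the first record for this index (the def) instead of overwriting it with a later read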
I just pushed a commit to the main branch which should fix this issue, though again it's a tricky one. Please let me know if it doesn't work, or if you have any more problems!
|
GITHUB_ARCHIVE
|
<?php
session_start();
include_once("../_mysql/conect.php");
error_reporting(E_ALL);
ini_set('display_errors', 1);
class configSession{
public function start($user){
try{
// Session populated with the user data received from the database
$_SESSION["user_full_name"] = $user["usuario_full_name"];
$_SESSION["user_first_name"] = $user["usuario_first_name"];
$_SESSION["user_pass"] = $user["usuario_pass"];
$_SESSION["user_email"] = $user["usuario_email"];
$_SESSION["user_phone"] = $user["usuario_phone"];
$_SESSION["user_cep"] = $user["usuario_cep"];
$_SESSION["user_rua"] = $user["usuario_rua"];
$_SESSION["user_bairro"] = $user["usuario_bairro"];
$_SESSION["user_numero"] = $user["usuario_numero"];
$_SESSION["user_token"] = $user["usuario_token"];
$_SESSION["user_data"] = $user["usuario_data_create"];
$_SESSION['data'] = $user;
return true;
}catch(Exception $e){
return false;
}
}
public function insertSession($user){
$con = $this->connect_db();
$query = $con->prepare("INSERT INTO sessions (sessions_user, sessions_token, sessions_status)
VALUES(:usToken, :ssToken, '1')");
$newToken = sha1($user["usuario_token"].time());
$query->bindValue(":usToken", $user["usuario_token"]);
$query->bindValue(":ssToken", $newToken);
$query->execute();
$state = $query->errorInfo();
// PDO errorInfo() returns SQLSTATE "00000" on success
if($state[0] === "00000"){
$_SESSION["user_session_token"] = $newToken;
return true;
}else{
return false;
}
}
public function logoutUser(){
$con = $this->connect_db();
$query = $con->prepare("DELETE FROM sessions WHERE BINARY sessions_token = :token");
$query->bindValue(":token", $_SESSION["user_session_token"]);
$query->execute();
if($query->rowCount() > 0){
session_destroy();
return true;
}else{
return false;
}
}
public function connect_db(){
$banco = new startDB();
$start = $banco->start();
return $start["db_con"];
}
}
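// Hypothetical usage sketch (assumes a $user row fetched elsewhere from the
// usuario table after a successful login check):
//   $session = new configSession();
//   if ($session->start($user) && $session->insertSession($user)) {
//       header("Location: dashboard.php");
//   }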
?>
|
STACK_EDU
|
[gst-devel] some plans for the future
rbultje at ronald.bitfreak.net
Fri May 10 14:35:02 CEST 2002
some might already know this, but I'll just mention it so everyone knows
this and they who are willing to cooperate will do this.
You know that I started with the idea to build our own DLL loader stuff
in Sevilla (during guadec) to replace the avifile dependency. We don't
want to depend on avifile. Period. So we need something better. What is
avifile? avifile is a mix of ffmpeg, divx4linux, xvid, mad, some other
decoders/encoders (amongst which divx5) and a dll loader. I think we can
make separate plugins for each of these subdependencies rather than
using avifile directly. This allows for less overhead, for more control
by gstreamer programmers and for simpler coding.
What we're missing here is the DLL loader - so we'll write our own. More
precise, we'll just use mplayer/avifile's code as an example and adapt
it to our needs.
Okay, that's point 1. Now point 2.
In the mjpegtools project, and also on #gstreamer, we've sometimes said
that libjpeg sucks. In mjpegtools, we created jpegmmx (MMX/SSE/3dnow
hacks on libjpeg). It's actually much more interesting to create our own
mjpeg encoder/decoder lib that is fully optimized and fully written from
scratch - this would allow for much faster decoding/encoding. Erik
(omega) and Andrew (Stevens, mjpegtools) are interested in writing this,
I'm interested in helping them (only C, though, no asm for me yet ;-) ).
Both of these libs will be written in C, probably hosted on codecs.org,
and be used by both projects (and maybe more). mjpegtools might actually
not link to the dll loader directly but rather use gstreamer itself as
an external dependency for the 'generic media encoder/decoder'. We used
avifile for that purpose in 1.6.0, because it did the job well, but we
want something better for the new release cycle (gstreamer perfectly
suits the job here) - "avifile is obsolete" comes pretty close to the
Anyway, people interested in joining our efforts are welcome. The end
goal of this is to get rid of the obsolete/ugly library dependencies in
gstreamer and to prevent us linking to one lib twice (like ffmpeg,
mad, ...), which is what avifile caused. Also, it will make gstreamer's codec
plugins faster and therefore even more interesting for application developers to use.
- /V\ | Ronald Bultje <rbultje at ronald.bitfreak.net>
- // \\ | Running: Linux 2.4.18-XFS and OpenBSD 3.0
- /( )\ | http://ronald.bitfreak.net/
More information about the gstreamer-devel
|
OPCFW_CODE
|
You can also change the install folder if you do not want to install to the default location. Your original links actually do work; I have tested them. Once the installation has completed, you will receive a screen showing the details of what was completed and whether there were any problems. You can always go back and configure it later.
If you need a tool that runs on platforms other than Windows, take a look at Azure Data Studio. Please mark this reply as an answer if it solved your issue, or vote as helpful if it helped, so that other forum members can benefit from it. The editions are also limited in size. You can also change the default collation settings if you are not in the United States. Query Execution or Results: allow more data to be displayed (Results to Text) and stored in cells (Results to Grid).
If you have any changelog info you can share with us, we'd love to hear from you! The license key is specific to the edition; a Dev license key would not work for Enterprise and vice versa. Please mark this reply as an answer if it solved your issue, or vote as helpful if it helped, so that other forum members can benefit from it. Neither of these is true. The Standard Edition does include support for two-node AlwaysOn Failover Clusters, and it's licensed either per core or per server. This edition is the best choice when you need to bundle database services with your application. Sometimes publishers take a little while to make this information available, so please check back in a few days to see if it has been updated.
It includes all the functionality of Enterprise edition but is licensed for use as a development and test system, not as a production server. For convenience, you can use Microsoft Update to automatically receive the latest patches and updates, enabling a high level of security and the latest features. Links to all of its versions are provided below. There is no other difference between these packages. It can be bundled with application and database development tools like Visual Studio and/or embedded with an application that needs local databases. The Enterprise edition supports up to 16-node AlwaysOn Failover Clusters as well as AlwaysOn Availability Groups, online operations, PowerPivot, Power View, Master Data Services, advanced auditing, transparent data encryption, the ColumnStore index, and more.
It is supported on Windows 10, Windows 7, Windows 7 Service Pack 1, Windows 8, and Windows 8.1. These steps should be similar on other versions of Windows; however, some prerequisites may be required on older versions of Windows. This also addressed the issue of users not being able to grab more than 43,680 chars from the cells of the grid. It is designed to integrate smoothly with your other server infrastructure investments. I looked at the updates and there are no new updates available.
And it comes in both 32-bit and 64-bit versions. I downloaded the Advanced version because it includes Management Studio, although you can download that separately. It includes 4 cores. Read and accept the license agreement and click Next. Please ask the vendor who provided you with the license; he would be the best person to call.
The Business Intelligence edition supports two-node AlwaysOn Failover Clusters, and it's licensed per server. To begin, launch the install program and choose the top option to install a new stand-alone installation. However, it's licensed per developer and can't be used for production work. Now we come to the Database Engine configuration. Use this if you need a simple way to create and work with databases from code. It does not ask you anything about the license; it just proceeds. Please mark this reply as an answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it. Thanks for the update.
All this sounds strange to me, and before upgrading all the computers for all the developers I would like to find confirmation in the guidelines supplied by my colleague. What should I do to run this on my computer? Step 3: On the Feature Selection page, select the Management Tools - Complete check box, and then proceed to complete the installation. Step 2: Run the file, and follow the instructions in the setup wizard to install. Basically, it is designed for easy deployment and fast prototyping. If you do so, not only will our documentation improve, but you'll also be credited as a contributor to the page.
The same limitation does not apply to Developer. And it gives you advanced-level security with transparent encryption. As already said, you might not be lucky enough to get the 2021 Dev edition, so you may need to buy the 2014 Dev edition. If you have any additional questions, feel free to leave your comments below. It requires no configuration and runs as a user process, not as a service. This edition uses the same sqlservr.exe. This could take a while to complete depending on the computer you are using.
Use this if you already have the database and only need the management tools. Refer to the link; hope the provided link helps. If you have any other query related to Windows, feel free to post in the Microsoft Community Forums. I am not sure where you can buy the Dev edition; maybe you can get it from Amazon as pointed out by others. Supported operating systems: Windows 7, Windows 7 Service Pack 1, Windows 8, Windows 8.1, and Windows 10.
|
OPCFW_CODE
|
I have a WRT54GS and I am having a really hard time getting it to work consistently. I can get online sometimes, but sometimes my iMac doesn't recognize the IP address even though my PS3 and 360 both work fine. I was able to see that I have an old firmware version, but I am not able to download the upgrade because I'm on a Mac. I have Windows XP on a disc but haven't installed it yet. Before I install that, my buddy is bringing over his Dell laptop to get on my router and try doing the firmware upgrade that way. Hopefully that fixes everything.
PS - Just so you know, don't call Linksys, because they will just tell you that they don't have support for Macs. I wasted 40 minutes on hold to figure that one out.
I have an old WRT54G and it works fine. If you go to their support website and search the knowledgebase for "Mac OS X" you will eventually find what you need to set it up. I called them and they made me use IE on a PC, but that was just because their script for how to open and navigate with a browser is PC-only. What you want to find is the default router address (It may even be in the setup instructions.) Put that into Safari or whatever browser you use and you're good to go. Then just follow their instructions for what to do once you're into the router control panel--that's not platform specific at all.
Thanks for the information. I will give it a try tonight.
Other than turning ON Wireless on the iMac, is there anything I need to do (on the iMac) in order to connect with the wireless router? I ask this because I had family visiting this weekend and they found that one of my neighbors had a Linksys router "open" and was able to connect to it using his Dell, which he already uses wirelessly at home and at the office. I went wireless on my iMac but was not able to make the same connection. Am I missing something?
When you turn on your AirPort, did their network show up in the list of available networks? If nothing happens when you click on it, you might need to go to your Network preference pane and drag AirPort above Ethernet. (You'll need to be wired to set up the router the first time, though--just connect your iMac to the first Ethernet port on the router.)
1) Can this router be used on an iMac?
2) There is no mention of Mac on the packaging or literature, but I see other discussions where people are using it. If so, how do you do the web-based configuration when it says IE or Firefox only?
navigate to 192.168.1.1 (username blank, password admin)
3) What do I need to do on the iMac side to configure...
The first time, you may want to use a cable from one of the 4 LAN ports on the router.
4) Any other information is appreciated.
Remember that every time you connect a different device to a cable modem, you need to restart the cable modem.
Rename the wireless network name, set up a wireless access password, and change the channel from 6 to something else.
sammy, Welcome to the discussion area!
Not sure why you posted here in the discussion area for Intel-based iMacs. Your iBook is not an iMac and is not Intel-based.
To get your iBook to connect wirelessly to the router, click on the AirPort item on the menu bar and select the name of the network you have created on the Linksys.
Unplug the modem from the router, and use an ethernet cord from the computer to one of the LAN ports on the WRT54G.
type admin for username and admin for password
This will open the router configuration page. You can rename the network, and setup the wireless security, and a bunch of other setting that are best left at default. The following two links have screen shots with what you'll see in 192.168.1.1 and they are for setting up cable internet and then choosing wireless settings. Hope these help.
Thanks. All seems to be going smoothly. At least I am connected using an Ethernet cable. Next step is the wireless setup.
I am trying to make sense of the choices for Wireless Security Settings, i.e. WPA, WPA2, RADIUS, WEP.
How do I know which to choose? And how do I tell if I have WPA or WPA2 devices on my network?
The router also has the "Secure Easy Setup" button should this just be Disabled since I am setting the options via the Web based configuration?
Use WPA-PSK and forget Easy Setup, which is for Windows. Linksys 54G routers are like staples and work with any Mac anywhere. Both Safari and Firefox will access the setup screen, as it is a web interface and the browser is immaterial. RADIUS uses a server, WEP can be broken in under a minute, and WPA2 might not be available depending on your OS. WPA-PSK is plenty strong with a GOOD password.
E.g. not "monkey" but 123Monkey345Gorilla678Orangutan
|
OPCFW_CODE
|
Result --> SingleResult
So we've got
OptimizationInput/Optimization
ResultInput/Result
And in another project, I've got the potential for
FindifResultInput/FindifResult
CBSResultInput/CBSResult
etc. The optimization pair doesn't particularly signal "schema" to me. And it might be convenient to distinguish between the set of *ResultInput/*Result pairs and the particular energy/gradient/hessian calc that ResultInput/Result now represents.
I suggest that the first two items above become the following, so that the * of *ResultInput/*Result is never the empty string.
OptimizationResultInput/OptimizationResult
SingleResultInput/SingleResult (AnalyticResult?)
Please weigh in on this change, even if it's just an up/down emoji response.
@dgasmith Does this proposal interact at all with plans to make optimization a service?
@mattwelborn Nope, no changes from me.
Another option is to do:
OptimizationInput / Optimization
SingleResultInput / SingleResult
...
Since these names are going to be typed many times I would be hesitant to make them too long.
FindifInput/Findif, CBSInput/CBS
I'm with @dgasmith here: It's clearer to use XInput/XResult, or perhaps even XInput/XOutput. ResultInput is syntactically painful.
I've thought ResultInput a pleasing quirk because you could search for CBSResult and hit both input and output stages. But sure, I don't mind XInput/XResult. It's searchable and predictable (latter not the case at present).
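For concreteness, a minimal Python sketch of what the XInput/XResult convention might look like; the class and field names here are illustrative only, not the project's actual models:

from dataclasses import dataclass

# Illustrative only: every procedure X gets a paired XInput/XResult,
# so the names stay searchable and predictable.

@dataclass
class OptimizationInput:
    initial_molecule: str
    keywords: dict

@dataclass
class OptimizationResult:
    final_molecule: str
    energies: list

@dataclass
class SingleInput:       # or AtomicInput, per the naming vote below
    molecule: str
    driver: str          # "energy" | "gradient" | "hessian"

@dataclass
class SingleResult:      # or AtomicResult
    return_result: float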
So any votes among the below? I'm plenty happy with SingleResult. It's distinctive but with a vague enough definition to not feel wrong stretching it.
SingleResult
AnalyticResult
PrimaryResult
AtomicResult
QuantumResult for the word play, both a quantum unit and a quantum chemistry one. I will see myself out...
QuantaResult?
Going with SingleResult/SingleInput and OptimizationResult/OptimizationInput? I get the feeling ResultInput is not well loved. My second-favorite for SingleResult is AtomicResult, which also makes a nice play on words, so long as it's not mistaken for single-atom result.
To drum up opinions, I'll query on a few more alternatives:
CBSResult (currently) or CompositeResult
FinDifResult (currently) or FDMResult or FiniteDifferenceResult or StencilResult
NBodyResult (currently) or MBEResult or ManyBodyResult or FragmentResult
I'm in favor of syntactic simplicity where it can be found, so something like XInput / XResult as a convention is something I can get behind. As a relative newcomer to the project wrapping his head around a rapidly-evolving set of tooling, it helps if the names of things give a sense for their structural purpose and place within the landscape.
ResultInput, for example, feels like a sign with one label pointing in two directions. It might get you to where you want to go, but you might also scratch your head every time before you get there.
@dotsdl Any thoughts on AtomicInput/AtomicOutput?
@mattwelborn Can you weigh in here?
@leeping Any thoughts here? I think your comment here would be quite valuable as you are not too deep into the ecosystem.
Last round of pinging, I am up for making the change this release with a deprecation warning.
@bennybp @twindus @lothian
Thanks for asking; after reading this discussion I think XInput / XResult is the more intuitive choice. :)
I strongly favor XInput/XResult over XInput/XOutput.
I'm for
AtomicResult
CBSResult
FDMResult
MBEResult
OptimizationResult
with second choice
SingleResult
CompositeResult
StencilResult
ManyBodyResult
OptimizationResult
(or any combination thereof)
What about:
CompositeResult
FiniteDifferenceResult
ManyBodyResult
FDMResult and MBEResult are hard to understand. CBSResult would be an oddity, but it is a well-known literature description.
Closed by #167, thank you everyone for the comments and feedback.
|
GITHUB_ARCHIVE
|
This project attempts to display transit directions between two locations, with information about temperature deviation due to heat islands and green cover along the route. Access the portal at http://smartpaths.govhack.thaum.io:3000/gmap and select a map from the menu on the right to view.
Going from A to B in a city shouldn't just be about speed. It should be about safety, comfort, and enjoyment. Can we help individuals choose how they navigate a city based on these factors? Further, can we help city planning authorities estimate which public spaces need attention?
Our solution comes in two parts: a React frontend application for displaying map visualisations, and a backend Python API service for data retrieval and for developers.
Given a mode of transport, our app provides you with human-centric data about possible routes between your location and another. In particular, it measures the urban heat and greenery coverage and provides a selection of the best routes, empowering you to make informed pathing decisions.
The Urban Heat & Green Cover datasets from the SEED portal provide heat deviation and the amount of green cover. Combining this georeferenced data with the Google Maps APIs, we integrated routing information to quantify the heat deviation and vegetation cover that can be experienced along a route.
Displaying the metrics for a route, together with a rich visualisation and interactive display options for the data, can give members of the public a better understanding of the possible routes and their options.
Since our solution makes it very easy to integrate other arbitrary datasets with geographic position information, other open datasets — such as Canberra's street light location dataset — can be included to provide further options for route customisation. For example, this data could be used to determine a well-lit route for commuting safety at night. The API enables quick and easy querying of the joined government and Maps datasets. Further, the queries made by users can be anonymised and stored by the backend. This provides a live feed of locations in the city that are in need of attention.
To provide routing information, we used the Google Maps API to first geocode the user specified origin and destination locations into geographic coordinates which we could then integrate with the Graphhopper service to provide possible routes. This allows us to still optimise for distance and duration, while taking heat and green cover as additional heuristics. To retrieve data for the expected temperature deviation and green cover, we integrated with the ArcGIS MapServer REST API. Using a list of geographic coordinates along the route, we query the GIS dataset to retrieve temperature and green cover data. In particular, we used the following fields:
- Mean Urban Heat Index temperature deviation from vegetated areas
- Percentage of green cover on a block area
- Ratio of green cover to block area

in addition to geographical data about the area each record applies to. Once we have retrieved the data, it is aggregated for each route alternative, and metrics such as the average are calculated, as sketched below.
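As a rough illustration of the retrieval step described above, a point query against an ArcGIS MapServer layer might look like the following Python sketch; the host, layer id, and field handling are placeholder assumptions, not the actual SEED endpoint:

import requests

# Hypothetical MapServer layer endpoint -- a placeholder, not the real SEED service.
QUERY_URL = "https://example.com/arcgis/rest/services/UrbanHeat/MapServer/0/query"

def attributes_at(lon, lat):
    """Return the attributes of the record intersecting a WGS84 point, if any."""
    params = {
        "geometry": f"{lon},{lat}",
        "geometryType": "esriGeometryPoint",
        "inSR": 4326,
        "spatialRel": "esriSpatialRelIntersects",
        "outFields": "*",
        "returnGeometry": "false",
        "f": "json",
    }
    features = requests.get(QUERY_URL, params=params).json().get("features", [])
    return features[0]["attributes"] if features else None

def route_mean(points, field):
    """Average one field (e.g. heat deviation) over sampled route coordinates."""
    values = [attrs[field] for p in points if (attrs := attributes_at(*p))]
    return sum(values) / len(values) if values else None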
A great improvement would be to use the UHI and Green Cover data as a heuristic for a custom routing algorithm. This would allow selection of route alternatives based on the user’s preference for green cover or temperature. Integration with weather data would also be an interesting project, as the algorithm could optimise for low heat during high heat days, and high vegetation cover during days of precipitation.
- NSW Urban Heat Island to Modified Mesh Block 2016
- Extracted from REST API on demand.
- NSW Urban Vegetation Cover to Modified Mesh Block 2016
- Extracted from REST API on demand.
|
OPCFW_CODE
|
Options (Device, Modems and TAPI)
The Modems and TAPI sub page of the Devices page of the Options window specifies which modems and other TAPI devices Ascendis Caller ID will use. You can enable and disable individual devices and change their descriptions.
When first run, and when new devices are added, Ascendis Caller ID tries to enable devices appropriately. In trial mode, this means enabling any device that looks like it could monitor a phone line.
Enabling and Disabling Devices
Each of the devices in the list can be enabled or disabled. When running the trial version of Ascendis Caller ID, up to 20 devices/phone lines can be enabled. When licensed, the license purchased determines the number of devices/phone lines allowed.
To enable or disable a device, click in the "Use" column next to the desired device.
Advanced Device Properties
A few settings that apply to all devices can be changed by clicking the Advanced button. Most users will not need to change these settings.
To change the Description or other characteristics of a device, select the device and click the Properties button.
Number of Devices/Phone Lines Supported
The text near the lower left corner of the window shows the total number of devices/phone lines supported, the number in use, and the number remaining. Ascendis Caller ID is licensed by the number of devices/phone lines used. If you purchased a single phone line license, only one device or phone line is allowed. If you purchased a two line license, two devices or phone lines are allowed.
The term "devices/phone lines" is used since some devices (like modems) only support a single phone line, while others (like Whozz Calling? devices) support multiple phone lines.
In trial mode you can enable up to 20 devices/phone lines. Once the trial is over Ascendis Caller ID will no longer report the caller information for any devices. Once Ascendis Caller ID is purchased and licensed, you will be allowed as many devices/phone lines as purchased. If you try to enable more devices than allowed, Ascendis Caller ID will warn you.
Note to Users of Previous Versions of Ascendis Caller ID
Ascendis Caller ID 184.108.40.206 and earlier listed many irrelevant TAPI devices that are defined by Windows. If you need to access one of these devices, hold down the CONTROL and SHIFT keys while opening the Options window.
Versions of Ascendis Caller ID before 220.127.116.11 used a check mark to indicate that a device was to be ignored. Later versions (as described above) use a check mark to indicate a device is to be used. The labels on the respective pages should make it clear which is which, and the program correctly interprets old settings.
The OK button saves any changes you made to the settings and closes the window. The Cancel button closes the window without saving changes. The Help button brings up this topic in the help file.
|
OPCFW_CODE
|
I love the site and would like to include a couple of videos in my blog, JudyandTim2012. I've joined Vimeo and have uploaded two videos to that site.
When I try to use the Vimeo option on TravellersPoint.com, I get an error message that the file size is too large. The larger of the two is only 291 MB, less than a minute and a half long.
Interesting - haven't come across that issue before. We'll have a look into it!
I need more details on this issue to help you:
1. Do you upload your video files to Vimeo via Travellerspoint (I don't see any authorization records in our database, so probably not) or directly at the vimeo.com website?
2. Or have you tried to insert the video into your blog entry? In that case we need the VideoID, nothing else.
3. A basic Vimeo account has a weekly upload limit of 500 MB (or 10 files per day), so this might be the reason as well.
Let me see if I can recall exactly what I did. First, when I tried to upload my video while on the TravellersPoint website, I was directed to use either Vimeo or YouTube. I clicked on the Vimeo option and was then led through the steps on the Vimeo site to join. I joined the free "Basic" plan, not the paid "Plus" plan, and clicked on "upload a video". I tried this a number of times; each time an X appeared, and while the message said my video had been uploaded, I noticed under statistics that I had not used up any of my megabyte capacity. When I tried to access the videos I thought I had uploaded from the TravellersPoint website, it didn't find anything.
I signed off and signed on again to TravellersPoint.com, accessed my blog 2012 Happy Trails - Week 05 July 1 to July 8 and clicked on the point in the blog where I wanted to insert my video. I clicked on the video icon which this time brought up the "Upload my Videos" page. I clicked on "Select File" button and went to the file of my video and selected it. When I clicked on "Open", I received the response on the TravellersPoint site "file too large to upload".
Now here is the thing: this is a small 1:19-minute video taken with my new Fujifilm SL300 camera, and it is just 277 MB, at 1411 kbps and 1280 x 720 resolution.
I hope this helps pinpoint where I'm going wrong.
|
OPCFW_CODE
|
School: ST PIUS X HS
Area of Science: Artificial Intelligence
If an infinite number of monkeys typed on an infinite number of typewriters, eventually all the great works of mankind would emerge.
The purpose of this project will be to create a software application that has the ability to rewrite and copy itself. By simulating natural selection, only the best programs will sustain themselves and multiply. Our goal will be to evolve a program that has the intelligence to adapt to any situation that is given to it. We will create a program that constantly copies itself (not perfectly). Another program will be created that will constantly challenge the first program to adapt to its environment. Failure of the first program to adapt to the environment will result in destruction and, ultimately, its failure to pass on its code (genes) to its "offspring". By constantly making the simulated environment more difficult to live in as the program progresses, a program that possesses a unique level of intelligence can be created. This project was chosen to test and apply the theory of evolution to the field of software engineering. If this experiment proves to be one hundred percent successful, the field of software engineering may never be the same. The reason is that Dynamic Software Evolution is a type of software that rewrites itself; if it can be created, then there will be no need for the programmer.
Research / Background
Most of the ideas of Dynamic Software Evolution are derived from Charles Darwin's theory of natural selection. Two major parts of this theory will be included in this experiment. The first part will be competition between species. "Hence, as more individuals are produced than can possibly survive, there must in every case be a struggle for existence ..." (Charles Darwin 91). This struggle for existence is the root cause of all evolution. As all species compete for a finite amount of resources, some species will inevitably be wiped out due to the seriousness of this struggle. The species that remain will be best suited for competition in nature. The experiment will simulate this competition and use it to narrow down the large number of simulated species into one species that is superior to all others. This experiment would not be possible without the creation of an imperfect replication algorithm (IRA). The purpose of the IRA will be to simulate variation, which is essential in any evolutionary process. "... can we doubt (remembering that many more individuals are born than can possibly survive) that individuals having any advantage, however slight, over others, would have the best chance of surviving and of procreating their kind? On the other hand we may feel sure that any variation in the least degree injurious would be rigidly destroyed." (Charles Darwin 108). The advantages and disadvantages proposed by Charles Darwin are all results of variations in each individual species and can be simulated using the IRA. The second part of Darwin's theory to be included in this project is selection by man. "One of the most remarkable features in our domesticated races is that we see in them adaptation, not indeed to the animal's or plant's own good, but to man's use or fancy. Some variations useful to him have probably arisen suddenly, or by one step;" (Charles Darwin 49). As Charles Darwin proposed, selection by man will benefit the programmer and not the species in this experiment. Selection by man will be simulated in this experiment because the programmer will be allowed to intervene in the execution of the experiment. For example, there may arise a number of cases where a certain species no longer possesses certain qualities essential for its survival, but does possess qualities that will help the programmer understand the theory of evolution. In this case, the programmer may save the species and allow it to advance in the experiment. Even though many of the ideas proposed in this experiment are derived from Charles Darwin's theory of natural selection, it would not be possible to perform this experiment without the aid of computers. Since computers can perform large numbers of calculations in a small amount of time, it is possible to perform an experiment of this magnitude. The instructions given to the computer will be written in Java. Java was chosen for this experiment because of its portability, simplicity, power, and its implementation of object-oriented programming. "Object-oriented programming organizes a program around its data (that is, objects) and a set of well-defined interfaces to that data." (Herbert Schildt 13). Object-oriented programming contains three fundamental concepts: encapsulation, inheritance, and polymorphism. "Encapsulation is the mechanism that binds together code and the data it manipulates, and keeps both safe from outside interference and misuse." (Herbert Schildt 19). Encapsulation prevents outside sources from accessing code.
This prevents the alteration of important data. Another important object-oriented concept is inheritance. Inheritance is the process where one object inherits the properties of another. This concept is used extensively when species are created, because all species are inherited from one common object. The purpose of this is to have certain attributes that all species must acquire. The last concept of object-oriented programming is polymorphism. "Polymorphism is a feature that allows one interface to be used for a general class of actions. The specific action is determined by the exact nature of the situation." (Herbert Schildt 22). This feature allows the programmer to create code that is more generalized. The specifics can be left up to the compiler. Evolutionary programming has long been used in attempts to create artificial intelligence. "Evolutionary programming was used to attempt to optimize a program written in the pseudo-assembly language Redcode, invented by A. K. Dewdney. Corewars is the game under which Redcode programs compete." (Blaha and Wunsch 1). Corewars is a game in which programs compete against each other for survival. Multiple programs are released and attempt to destroy all other programs. The techniques that programs use to destroy each other can become very advanced. "The most important and interesting instructions are JMP, MOV.I, DAT, and SPL. JMP simply moves the point of execution, as an unconditional branch. MOV.I copies an instruction at its first argument's location and overwrite the instruction at its second argument's location..." (Blaha and Wunsch 1). These techniques are used in combination with each other to destroy the opposing programs. They display the advanced methods that programs will use to carry out their instructions. This advanced evolutionary program is just one of many applications of evolutionary programming.
The implementation of this experiment will require a number of separate programs all running at the same time in a common directory. The programs will fall under three categories: environmental, data logging, and species. The environmental category will contain one program. This program will be responsible for the simulation of nature itself. It will contain data relating to the amount of resources, time, position of all the species, and other environmental factors. Another responsibility of the environmental program will be to use the imperfect replication algorithm (IRA) on species when they request it. The data logging programs will be responsible for recording and displaying the data given by the species and environmental programs. This data will be analyzed after the completion of the experiment. The last category of programs will be the species programs. The species programs will be the largest group of programs. All the species programs will possess different code but will all extend the Species Class. The species programs will simulate the individual species of the environment. They will all interact with the environmental class and compete for resources. The species will have the ability to reproduce. Reproduction will be asexual and will be handled by the environmental class at request of each species.
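As a rough sketch of the imperfect replication algorithm (IRA) described above, written in Java since that is the project's chosen language (the class name, alphabet, and mutation scheme are illustrative assumptions, not the project's actual code):

import java.util.Random;

// Sketch of an imperfect replication algorithm (IRA): copy a "genome"
// string, mutating each character with a small probability.
public class Ira {
    private static final Random RNG = new Random();
    private static final char[] ALPHABET = "ABCDEFGH".toCharArray();

    // Returns an imperfect copy of the genome.
    public static String replicate(String genome, double mutationRate) {
        StringBuilder child = new StringBuilder(genome);
        for (int i = 0; i < child.length(); i++) {
            if (RNG.nextDouble() < mutationRate) {
                child.setCharAt(i, ALPHABET[RNG.nextInt(ALPHABET.length)]);
            }
        }
        return child.toString();
    }

    public static void main(String[] args) {
        // Each call yields a slightly different "offspring".
        System.out.println(replicate("AAAAAAAAAA", 0.1));
    }
}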
In conclusion, after researching Dynamic Software Evolution, a number of predictions can be made about its outcome. If the experiment were done correctly, then a significant amount of software evolution would take place. The software may become advanced enough to survive in its simulated environment for a remarkable amount of time, but otherwise no major artificial intelligence breakthroughs should result. The extent to which the software would evolve would probably be limited, because the amount of time the experiment is allowed to run is limited. The more time given to the experiment, the greater the extent of evolution. Therefore, if an infinite amount of time were given to the experiment, then eventually artificial intelligence would emerge. If any evolution does occur, then the experiment would be classified as a major success and could be allowed to run for longer intervals. Dynamic Software Evolution is a major attempt at the creation of artificial intelligence, and any result will prove to be beneficial.
Blaha, Brian, and Don Wunsch. "Evolutionary Programming to Optimize an Assembly Program." University of Missouri. 5
Darwin, Charles. The Origin of Species. New York: Random House Inc, 1998.
Schildt, Herbert. Java 2 The Complete Reference. 4th ed. Berkeley: Osborne / McGraw-Hill, 2001.
Sponsoring Teacher: Kerrie Sena
|
OPCFW_CODE
|
Wallet freezes when signing "Add Hotkey" transaction
When trying to sign an Add Hotkey transaction, the wallet completely freezes and none of the buttons function anymore. The only solution was disconnecting the wallet and reconnecting it.
Here's the protobuf payload I used:
620b10dee1fd8a8ef382a1d401122322210a1f0a1df6d2e7ba456c571aa104349e524b964f08511bcdcba858c98dac48a302
This appears to be an issue not just for "Add Hotkey", but also for the following transactions:
Remove Hotkey
Disburse
Start Dissolving
Stop Dissolving
All inputs that I've tried for the above transactions resulted in the wallet freezing.
Can you send me the full blobs?
@leongb Yes, absolutely. Literally any payload I try for any of the above commands causes the freeze. Here is an example:
d9d9f7a66361726758296202107b12232a210a1f0a1d1ae9690fa70da5046b84210162105d0f6e510b7211fa7b72aeed3337026b63616e69737465725f69644a000000000000000101016e696e67726573735f6578706972791b169bbc290ec468006b6d6574686f645f6e616d65706d616e6167655f6e6575726f6e5f70626c726571756573745f747970656463616c6c6673656e646572581d8a4aa4ffc7bc5ccdcd5a7a3d10c9bb06741063b02c7e908a624f721d02
@leongb This might be insightful. This is a payload for a "disburse" transaction that does not cause the wallet to freeze:
d9d9f7a6636172674e0a0a10a7d18aaad3a2a2c6131a006b63616e69737465725f69644a000000000000000101016e696e67726573735f6578706972791b169bbc9f272319c06b6d6574686f645f6e616d65706d616e6167655f6e6575726f6e5f70626c726571756573745f747970656463616c6c6673656e646572581d8a4aa4ffc7bc5ccdcd5a7a3d10c9bb06741063b02c7e908a624f721d02
And here's a disburse payload that freezes:
d9d9f7a66361726758420a0a10a7d18aaad3a2a2c6131a3412320a30ebc779d7c7b67ddd9b7bae7ae9eeb6737ebad756fdefde1d7dc6dcd3879be36dbb79e7dbef769bddcedad9dd1a7b9efb6b63616e69737465725f69644a000000000000000101016e696e67726573735f6578706972791b169bbce4e596d8c06b6d6574686f645f6e616d65706d616e6167655f6e6575726f6e5f70626c726571756573745f747970656463616c6c6673656e646572581d8a4aa4ffc7bc5ccdcd5a7a3d10c9bb06741063b02c7e908a624f721d02
The two disburse vectors do not have the inner CBOR map "content", which we assumed should always be there?
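For reference, one way to inspect these blobs is to decode the CBOR envelope, for example with Python's cbor2 package (a debugging sketch, not part of the wallet code):

import cbor2

# Decode a hex-encoded request blob and list its top-level CBOR keys,
# to check whether the inner "content" map is present.
blob_hex = "d9d9f7a6..."  # paste one of the full payloads from this thread
obj = cbor2.loads(bytes.fromhex(blob_hex))
if isinstance(obj, cbor2.CBORTag):  # unwrap the self-describing CBOR tag (55799)
    obj = obj.value
print(sorted(obj.keys()))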
closed by https://github.com/Zondax/ledger-icp/pull/116
|
GITHUB_ARCHIVE
|
Do you remember the stories where a computer engineer advises you to store all the important files on partition D, while partition C is for the program files? Well, forget about it. The hard drive on my laptop is dead. In seconds. No data saved. On either partition. "But HOW?", my friend screamed this morning.
I have been using laptop computers for over a decade. My dynamic lifestyle, frequent travels, and changes of living and working places since the end of the '90s simply determined that I would be using laptops. I have had many of them and experienced various malfunctions and software errors, but so far never had any problems with hard disks major enough to cause a complete crash and loss of data. I heard that such situations usually happen on weekends, when technicians are not working. Now I believe it.
Yesterday morning I had this message on the screen: PXE-E61: Media test failure, check cable. PXE-M0F: Exiting Broadcom PXE ROM. I couldn't start up the system, manipulate the HDD from the BIOS, or find out what happened, since I have a relatively new laptop that is known for excellent performance, durability, and features. I tweeted and posted the news on my Facebook account and asked for help. I got some guesses. Today, someone who happens to be a computer engineer tried to boot my laptop from the BIOS using Linux/Ubuntu, but failed. The BIOS showed zero hard drives. Our fear became the worst-case scenario, which happened at a really undesirable time in the project flow.
I hadn't backed up my data in the last 25 days, at least. I hadn't saved my important files in Dropbox either. I hadn't used a USB flash drive to back up my current work and the projects I am working on now. I lost them all in seconds. We went to the computer service, and the official technician immediately took me back into their working offices, opened the laptop, tested the hard drive a few times, and announced it was dead. No help. No data extraction. Nothing. They had to replace it with a new one. I couldn't say I was upset as much as I was shocked that it actually happened without reason, given that I am a good user, have a great laptop, and good life karma. We don't know why it happened. Neither does the technician. He said that in his twenty years of fixing computers, sometimes things happen without reason. In between everything that happened today, I mostly tweeted, and many of you contacted me and called me, even long distance. I appreciate every one of your reactions, kind words, support, and help. That matters.
What I have lost is all the data I've been working on in the last three or four weeks: the design of the projects' protocol, recent research documents and e-articles (those I can recover, though). I also lost the TREE design on the mind map, app files, all the relevant bookmarks (over 24,000!) for work and research that I will never be able to find or restore, many GBs of photography (only 1/50 of which you can see on Flickr), over 300 GB of music (those around me know that music is a "must" when I work), etc. I have less than 90 hours to send the relevant documents before the deadline, and I am writing this blog post while simultaneously downloading eleven programs and services I may need, those I can think of at the moment, as I also lost the list of the programs installed in the previous life of this laptop. I don't even think about the emails I lost in Thunderbird (if anyone knows how, or whether it is possible, to bring back all the emails from different accounts, even those that no longer exist, please email me). Some of you suggested there are disk doctors who can extract data, but I assume it costs a lot, and my technician told me that probably only the folks in Taiwan who manufactured the HDD could retrieve the data.
But then, I believe that this event, the data crash, and the new HDD will lead to newer and better things, more inspiring thoughts, and productive ideas for current and future projects. I perceive it as some kind of wonderful test. A test of the machine, a test in life, and of the relations with others. I didn't tell you that I was writing a lot in my Moleskine notebooks in the last 24 hours. And there is more hard work for me in the next few hours. Nothing is lost; everything is on breath and reboot.
“Sir Isaac Newton had on his table a pile of papers upon which were written calculations that had taken him twenty years to make. One evening, he left the room for a few minutes, and when he came back he found that his little dog “Diamond” had overturned a candle and set fire to the precious papers, of which nothing was left but a heap of ashes.
"O Diamond, Diamond, thou little knowest the damage thou hast done."
Updated: I got a serendipity moment today. The technician "fixed", by good chance, my previously (as I thought) broken touchpad, simply by unlocking it with two keys. Goodness me, I spent months at OUCS with Oxford engineers who couldn't solve the mystery of the non-working touchpad, advising me to buy a wifi mouse, as the procedure for fixing the touchpad hardware would last a month or two. In less than two hours, the technician du jour showed me how it works now. Oi!
|
OPCFW_CODE
|
The great Office Server smorgasbord Part 2: MOSSing up Groove Server
Office Groove 2007 may seem like a client-only application, but for enterprises with many users, Groove Server is the way to go.
One trick we almost missed was configuring SMTP. Groove relies heavily on e-mail: Groove invites go out that way, and when Groove users want to join different domains on the server, administrative information such as account configuration codes is transferred this way, too. To let that happen, however, you'll need to set up an IIS SMTP virtual server. The Groove Server docs give clear instructions on making that happen, but it's an easy step to miss when you really just want to dive in and start setting up Groove domains. It seems reasonable to wonder why Groove won't work with Exchange Server, and the answer seems to be that Microsoft wanted Groove to be a self-contained application.
Another step you don't want to miss is adding a directory server. Microsoft says any LDAP 3.0 server will do, but in the vast majority of cases Groove Server will be talking to Active Directory. There are several ways to manage this integration, but our favorite is automatic data integration, since this automates the dissemination of information between the two systems. Enable this, and you'll be able to update information in AD and have it automatically imported into Groove Server.
Overall, we think Microsoft isn't kidding: for companies with more than 100 seats that really want to exploit the Groove client, Office Groove Server is a must-have. The security and data retention capabilities alone make it worth the cost and effort -- probably. We say probably because once again, Redmond is vague on Groove pricing. This package is only available to companies that already have some kind of software volume licensing agreement with Microsoft, and the price varies depending on what kind of agreement that is.
Not cheap, but a good investment
While you're puzzling this out, be sure to get some information on Groove Enterprise Services. For the most part, this is functionally the same as Office Groove Server, it's simply hosted by Microsoft. You'll find the same security, communications, and data retention capabilities as with an in-house version, though you will necessarily come up against limits when trying to exploit DataBridge. This makes Groove Enterprise Services sound like a great solution for SMBs, but Microsoft's data sheets still indicate you'll need a software volume license agreement in place in order to be offered the service.
This somewhat constrictive pricing model aside, if you've got a large user base and they're all looking to feel Groovy, Office Groove Server is recommended.
|
OPCFW_CODE
|
Originally posted by Sudd Ghosh:
I am very excited and glad to know that I'll soon be able to dive deep into the best practices in security schemes. I hope you have covered the security aspects as applicable to the payment processing industry, where the security needs are tremendous. Specifically, I would be interested in some of the following topics:
<RN> The focus of this book is primarily on Java and J2EE-based technologies. We also covered the XML security standards and standards-based technologies for Web services, identity management, and service provisioning. The book does not delve into vertical-industry-specific security aspects. The key reason is that we want to make sure we are agnostic about vertical-industry segments, as "information security" is a common goal across all industry segments. So as long as your application makes use of core Java technologies, Java-based Web services, or an identity management solution, I am sure the patterns and best practices described in this book will help. </RN>
Access control: Rule based dynamic approach and role based access
<RN> The book digs into a lot of details and approaches for building RBAC in Java/J2EE applications and also making use of XACML standards for supporting XML Web services </RN>
<RN> The book has a dedicated chapter (Chapter 15) on personal identification using smart cards and biometrics. It discusses the role of personal identification technologies in combating identity crimes. We present the enabling technologies, architecture, and implementation strategies for using smartcard and biometric technologies for identification and authentication services. </RN>
<RN> Although the book has no scope to address the details of FIPS, we did discuss FIPS-140-1 compliance for cryptographic devices, smartcard readers, and biometric scanners. </RN>
Practical limitation of setting high water marks.
<RN> The book has no planned scope to discuss HVM. From a security implementation standpoint, to support confidentiality and integrity protection you would be able to use the "Secure Logger" and "Audit Interceptor" patterns for ensuring secure logging and auditing. </RN>
How to achieve end-to-end identity management in real time (ie, from customer to the acquiring and issuing bank and back to the customer).
<RN> We have a full-fledged case study that discusses end-to-end security design with federated identity management. Refer to Chapter 14, a case study showing a "Web Portal" that integrates multiple enterprises via identity management. </RN>
Effects of encryption on real-time payment processing.
<RN> There is always a performance overhead due to encryption and validating digital signatures. This can be overcome by using cryptographic accelerators. You may refer to the SecurePipe pattern (Chapter 9) for details. </RN>
Thanks and looking forward, Sudd
Originally posted by Tina Coleman:
I was disappointed to not get to see a table of contents and index for the book out on Amazon. I know that's not likely the authors' doing, but passing along that feedback. Speaking as someone who's done a good bit of .NET programming of late, I wonder if the authors could speak to how much of the text is J2EE-specific, and how much would be more widely applicable. I expect that since this is a patterns book, it should be more widely applicable. Definitely interested in some of the various topics listed in the book blurbs.
|
OPCFW_CODE
|
Equivalence of definitions of simplicial manifolds, and which ones imply "no branching"
I've found a couple of different definitions of simplicial manifolds with boundary:
A pure abstract simplicial $n$-complex such that the (geometric realization of the) link of every simplex $\sigma$ of dimension $k$ is homeomorphic to a sphere or ball of dimension $n - 1 - k$. (e.g., these notes or these notes.)
A pure abstract simplicial $n$-complex such that the (geometric realization of the) link of every vertex $v$ is homeomorphic to a sphere or ball of dimension $n - 1$. (i.e., #1 but just for $k=0$). (e.g., these notes.)
#1 obviously implies #2, but I'm wondering if #2 implies #1.
In particular, I'm interested in showing that a simplicial manifold obeys the "no branching" condition (part of the definition of a pseudomanifold), which is that every $(n-1)$-simplex is the proper face of 1 or 2 $n$-simplices. #1 implies no branching immediately, and if #2 implied #1, then #2 would also imply no branching. But failing that, does #2 imply no branching also?
How is that variable $k$ quantified in $1$? Presumably something like "there exists $k$ satisfying [SUCH AND SUCH INEQUALITY] such that ..."?
Apologies, fixed; it's the dimension of $\sigma$, so it holds for all simplices in the complex.
So I'm not aware of "simplicial manifold" as a thing. On the other hand the category of "PL manifolds" is a thing, and its definition is slightly more intricate than what you are saying, namely a PL version of 2: A simplicial $n$-complex is a PL manifold if the link of every vertex is a PL manifold of dimension $n-1$ that is PL-homeomorphic to the standard PL manifold structure on the sphere of dimension $n-1$ (e.g. the boundary of the $n$-simplex). And then this definition does indeed imply the corresponding version of 1. Here's a quick proof, by induction on $n$.
Consider a PL manifold $M$ and a $k$-simplex $\sigma \subset M$, with $k \ge 1$. Pick a $0$-simplex $v \in \sigma$; then $S = \text{Link}_M(v)$ is an $(n-1)$-sphere with the standard PL structure. The intersection $\tau = \sigma \cap S$ is a $(k-1)$-simplex in $S$. The key observation is that $\text{Link}_M(\sigma)$ is simplicially isomorphic to $\text{Link}_S(\tau)$, and so it follows by induction on $n$ that this link is a standard PL sphere of dimension $n-k-1$.
But in general, 2 does not imply 1. A counterexample is given by the double suspension theorem of Cannon and Edwards, which produces a simplicial structure on $S^5$ that satisfies 2, and a 1-simplex in that simplicial structure whose link is a 3-dimensional manifold that is not even homeomorphic to $S^3$.
Thanks, "PL manifolds" seem very close to what I want! I've also seen "simplicial manifolds" be called "combinatorial manifolds". Would you happen to know of a reference that covers the analogy of the statement 2 => 1 for PL manifolds?
Probably one could find that in the standard textbook for PL manifolds by Rourke and Sanderson.
Although the proof is quick and I added it to my answer.
Thank you for the reference and proof!
|
STACK_EXCHANGE
|
Ai2html is an open-source script for Adobe Illustrator that converts your Illustrator documents into HTML and CSS. Once we have converted our text samples into sequences of words, we need to turn these sequences into numerical vectors. The new Text Recognition plug-in for Illustrator is the only OCR tool that converts outlined text in artwork to editable text directly in Adobe® Illustrator®. For the common formats of PNG, JPEG, and TIFF, choose an output resolution by considering the final size at which you want to reproduce the resulting image. This can have a significant impact on some models (e.g., generalized linear models) that expect normally distributed features. Added compatibility for (more efficient) chained requests to Google Fonts (separated by a pipe character (|)) to the Auto-detect feature.
Options For Rapid Programs Of Great Headline
Comet Font: One of the specialised blur operators, "-motion-blur", allows you to create a comet-like tail on objects in an image. Most raster images appear crisp when zoomed out, but when zoomed in the pixels become more obvious, like in the tropical pattern above. You can enlarge and reduce a vector file without compromising the quality. Glyphs of serif fonts, as the term is used in CSS, tend to have finishing strokes, flared or tapering ends, or actual serifed endings (including slab serifs). You can also change the size for monospace fonts. The difference between a vector and a raster image is the way the computer generates the file in the software program.
Sorry, no can do. Once text characters have been converted to paths, they're just that: paths. Logo design is one of the main reasons to convert a font you already have and tweak it to fit the project. Select the font files and copy them into the C:\Windows\Fonts folder. When you find each font, press the download button and save the file inside the same directory as the HTML and CSS files you saved earlier. With the file open in Preview, simply select File > Export, and then choose the type of output you want from the Format popup menu. Google Fonts are open source, so you can either attach a copy of the fonts with the presentation itself or, better still, simply embed the fonts in the document before sending it.
Images, graphics, and files with high detail come in different ranges of resolution. Using these vector fields, the logo will look like the original and will not lose anything in its design. Today, we see a lot of serif fonts in traditional mediums such as newspapers, magazines, and books. Once this tutorial has been read, the user can change the font and color as required. Import SVG fonts or outlines: use your favorite vector editing program to create vector outlines, then import them to glyphs via SVG. You can edit your fonts in one of three ways: via the Customizer, in the post or page editing screen, or using CSS. These sets of fonts eventually became 'web safe fonts', because regardless of the computer, the fonts will safely appear on your website.
Uncovering Root Details In Google Fonts
Please "Convert Fonts to Outlines" or "Convert Fonts to Curves"; this allows for easy scaling and color corrections. When starting off a project, one of the first things we request from the client is a vector file of their logo. To select more than one font, hold down the CTRL key while you click on the font files. Tested on the SWAG benchmark, which measures commonsense reasoning, ELMo was found to produce a 5% error reduction relative to non-contextual word vectors, while BERT showed an additional 66% error reduction past ELMo. Some typefaces do not include separate glyphs for the cases at all, thereby abolishing bicamerality. While most of these use uppercase characters only, some labeled unicase exist which choose either the majuscule or the minuscule glyph at a common height for both characters.
Different browsers support different font formats, so we need to cover our bases and provide everything that various browsers may need. Select the shape layer in the Layers panel. So for a while, we're stuck with whatever styles type designers provide for us in the font files themselves. Uninstalling a font in Windows Vista. The sans-serif geometric font is most suitable for fun, youthful brands or those that are marketing to children. Adobe Illustrator is a professional image creation program and is the easiest way to create vector images from JPG files. If you have never taken the time to explore the type side of Illustrator, you may be surprised at the powerful tools that Illustrator provides for working with type.
|
OPCFW_CODE
|
Ever driven yourself nuts because you could not delete a file? Removing the file leads to an error:
% rm -f file rm: cannot remove `file': Permission denied
% rm -f file rm: file: Operation not permitted
Obviously, you checked that the permissions are correct. They are:
% whoami freek % ls -la file -rw-r--r-- 1 freek user 0 Apr 28 07:30 file
Here are a few tips for further checks.
Restrictive Parent Folder
Your first attempt would be to look for the permissions of the parent folder:
% ls -l file.txt; ls -ld . * dr-xr-xr-x 2 7432 user 4096 Feb 25 00:03 ./ -rw-r--r-- 1 7432 user 0 Apr 28 07:30 file.txt
In this case, the file could not be deleted due to a restrictive folder. Fix with chmod:
chmod u+w .
Disk is Locked
To view a list of mounted file systems, use mount without any arguments.
[On Linux] % mount /dev/sda1 on / type ext3 (rw,errors=remount-ro,usrquota,grpquota) proc on /proc type proc (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw) usbfs on /proc/bus/usb type usbfs (rw) [On Mac OS X] % mount /dev/disk0s10 on / (local, journaled) devfs on /dev (local) fdesc on /dev (union) <volfs> on /.vol /dev/disk0s12 on /Applications (local, journaled) /dev/disk0s16 on /Users (local, journaled) automount -nsl on /Network (automounted) automount -fstab on /automount/Servers (automounted) /dev/disk3s1s2 on /Volumes/My CD (local, nodev, nosuid, read-only)
In this case, /Users is a local (not networked) file system and writable, while /Volumes/My CD is a read-only file system.
NFS mounted disks
In addition, pay attention to file systems that may be mounted over the network using NFS.
Run the mount application as above and look for "type nfs" (on Linux) or for a missing "local" (on Mac OS X).
If a file system is mounted remotely, changing to root does not enhance your permissions, as the permissions of the NFS daemon still apply. It may be worth logging in to the NFS server and deleting the files there.
Linux supports POSIX.2 access control lists using the getfacl and setfacl commands. Check for permissions with:
% getfacl file.txt
If getfacl cannot be found, check the ls man page for "ACL". If it cannot be found there, you may assume that access control lists are not (yet) supported on that particular machine.
Mac OS X specific
Below some possible causes are listed.
Your first attempt should be to try it as root:
sudo rm -f file.txt
Your second attempt should be to list special permissions and flags of the file and parent folder:
ls -loe file.txt GetFileInfo file.txt ls -loed . GetFileInfo .
Lock bit set
On an HFS filesystem, each file has a few meta attributes, including the locked and invisible bits. You can inspect these with GetFileInfo, a command-line tool that is included in the Developer Tools.
% GetFileInfo file file: "/Users/freek/file" type: "" creator: "" attributes: avbstcLinmedz created: 04/27/2006 16:19:48 modified: 04/27/2006 16:19:48
The capital L signifies that the lock bit is set. Remove it with SetFile:
% SetFile -a l file
The -o option of ls lists the flags of a file:
Possible flags are:
arch set the archived flag (super-user only) opaque set the opaque flag (owner or super-user only) nodump set the nodump flag (owner or super-user only) sappnd set the system append-only flag (super-user only) schg set the system immutable flag (super-user only) sunlnk set the system undeletable flag (super-user only) uappnd set the user append-only flag (owner or super-user only) uchg set the user immutable flag (owner or super-user only) uunlnk set the user undeletable flag (owner or super-user only)
The schg, sunlnk, uchg and uunlnk flags can prevent a user, or even root, from deleting a file.
You can change the flag with
chflags noschg,nosunlnk,nouchg,nouunlnk file
You may not be able to remove the schg flag.
You can set the schg flag as root in a normal Mac OS X Terminal session. But once set, you cannot clear it unless you go into single-user mode. Once in single-user mode, use chflags to turn the schg bit off (as shown above).
Why does it act like this? Well, clearing the schg bit requires that the kernel's 'secure level' be set to 0 or less. In a standard OS X boot, the secure level is set to 1, which restricts certain functions, such as clearing the schg flag. When booted into single-user, the secure level is set to 0, which does allow you to clear to the schg flag.
% sudo shutdown now [ends Aqua and enter single-user mode] % su [become root] % chflags nouchg,noschg testfile [change the two flags I thought responsible] % rm testfile [get rid of it!] % exit [end root] % exit [restart Aqua]
Access Control with ACL
Mac OS X support POSIX.2 access control lists since 10.4. You can use the -e option of ls to display:
% ls -le -rw-r--r--+ 1 juser wheel 0 Apr 28 14:06 file1 owner: juser 1: admin allow write
You can change the permissions with chmod
% chmod +a "admin allow write" file1
See man chmod for more information, especially the section on "ACL manipulation options".
For the sake of completeness, ACLs do not work until they are enabled on a volume:
% sudo /usr/sbin/fsaclctl -p / -e
Besides the above, possible workarounds include:
- Try as root with sudo
- Boot in single user mode
- Access the hard disk with your computer in Target Disk mode
- Boot in Mac OS 9
- You can't empty the Trash or move a file to the Trash in Mac OS X
- Troubleshooting permissions issues in Mac OS X
For the sake of completeness, if you can delete a file in the Terminal but not in the GUI of Mac OS X, it may be worthwhile to check the permissions of the Trash:
# ls -ld /.Trashes /.Trashes/* d-wx-wx-wt 3 root admin 102 12 Apr 22:54 /.Trashes/ drwx------ 2 freek admin 68 12 Apr 22:54 /.Trashes/501/
where the UID of user freek is obviously 501.
|
OPCFW_CODE
|
JSDate - Parse a ddMMyyyy format date correctly
I'm having trouble parsing user input with JavaScript, where I can't get the parser to accept dates in ddMMyyyy format correctly. It parses correctly when there are separator characters.
The example below is using DateJS (NZ localised), and I've had an initial attempt with the newer MomentJs (which isn't proving ideal for input validation). I'm open to other frameworks, if they're going to handle input cases adequately.
My test cases:
// Parses correct value
var dateWithHyphens = Date.parse('01-06-2012');
// Parses incorrectly, using MMddyyyy instead of ddMMyyyy
var dateWithoutHyphens = Date.parse('01062012');
// Parses incorrectly, using MMddyyyy instead of ddMMyyyy
var dateWithFormat = Date.parse('01062012', { format: 'ddMMyyyy'});
I've created a JSFiddle for this: http://jsfiddle.net/ajwxs/1
The test cases should return 1 June, but the incorrect ones return Jan 06. (Note that this is input parsing; output formatting is too late.)
Any suggestions on if JSDate can be better prodded to use the correct format for parsing those dates?
Update
In this application, I'm validating a number of possible user inputs, including:
01062012
01/06/2012
010612
This would make a parseExact-style implementation a bit verbose...
You want to use the Date.parseExact method. For some reason it works while the normal parse does not. Also you don't need to pass the format option wrapped in an object.
According to the spec Date.parse is not supposed to take a format option so that is probably why.
// Parses correctly
var dateWithFormat = Date.parseExact('01062012', 'ddMMyyyy');
I have also updated the jsfiddle to be sure.
Update
I think the issue is that the parser hard-codes a bunch of simple formats and those override the localization setting for the date, month and year order in some cases. If you put the following before where you use Date it will fix your issue.
Date.Grammar._formats = Date.Grammar.formats([
"\"yyyy-MM-ddTHH:mm:ssZ\"",
"yyyy-MM-ddTHH:mm:ssZ",
"yyyy-MM-ddTHH:mm:ssz",
"yyyy-MM-ddTHH:mm:ss",
"yyyy-MM-ddTHH:mmZ",
"yyyy-MM-ddTHH:mmz",
"yyyy-MM-ddTHH:mm",
"ddd, MMM dd, yyyy H:mm:ss tt",
"ddd MMM d yyyy HH:mm:ss zzz",
"ddMMyyyy",
"MMddyyyy",
"ddMyyyy",
"Mddyyyy",
"dMyyyy",
"Mdyyyy",
"yyyy",
"dMyy",
"Mdyy",
"d"
]);
This is not pretty but it is definitely shorter than listing all of the possible options, if only by a little bit. If you can find a less quirky Date library that might be a better option.
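For what it's worth, if you do switch libraries: Moment.js can handle the mixed inputs from the question by strict-parsing against an explicit list of formats (a sketch; the format array is an assumption based on the inputs listed above):

// Strict parsing against an explicit list of accepted formats;
// an input matching none of them yields an invalid moment.
var m = moment('01062012', ['DD/MM/YYYY', 'DDMMYYYY', 'DDMMYY'], true);
if (m.isValid()) {
    console.log(m.format('D MMMM YYYY')); // "1 June 2012"
}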
Updated the question to include a wider range of expected user input. I will look into implementing your solution.
I don't know about DateJS, but if you've found that it is assuming MMddyyyy where you want ddMMyyyy then you could do a quick replace on your string to switch it to MMddyyyy before parsing:
Date.parse( '01062012'.replace(/^(\d\d)(\d\d)(\d\d\d\d)$/,"$2$1$3") );
Or do a similar replace to insert hyphens and make it dd-MM-yyyy:
Date.parse( '01062012'.replace(/^(\d\d)(\d\d)(\d\d\d\d)$/,"$1-$2-$3") );
Either way a string that already had hyphens in it would be left unchanged and thus be parsed as per your first (successful) test case.
The rest of the module is set to use the dmy format. The regex is a slippery slope, having to test for all cases the date module already knows about - mixtures of 1-2 digit day + months, and 2 or 4 digit years...
Yes, it's not ideal, but then a mixture of 1-digit days or months wouldn't really work at all without separators, because, e.g., what date is 111972 - 1/1/1972 or 11/19/72? If you assume that a string with no separators must be in ddMMyyyy format, then the regex in my answer should do the trick, since it doesn't change strings with separators, and you've already said that strings with separators work the way you want. (I hope somebody else posts a better answer, but if not, at least this will work...)
|
STACK_EXCHANGE
|
Resist jumping into delivery, before understanding the problem
The presenting problem is rarely the real problem. We are conditioned to have solutions even when we don't understand the problem, and the higher up in the organisation we go, the more we feel the pressure to come up with solutions. It is expected of us to have the solution, and we are hesitant to say: I don't know what the solution is; I need to go and study the system, find out what the underlying problem is, and then experiment to find a solution for our context. Instead we come up with a solution based on what has worked before, which may or may not work. Once we commit to a solution, because we have invested political and emotional capital in it, we pursue it even when it is the wrong solution. One way to guard against our biases blinding us is to set out to invalidate our assumptions about the solution.
It is okay to say I don't know, I am going to find out.
Working with clients, I am surprised that when we ask the question "what's the problem?", most of the time we get the answer that we need to deliver solution X. Our desire to jump into a solution and stay busy leads us to skip understanding the problem, or to expect that to be someone else's problem.
Starting a new solution without first understanding the purpose will leave us busy with work but struggling to answer why we are doing the work, what measures we need to tell us how we are meeting the purpose, and what wider system-wide measures we need to detect side effects and weak signals. How do we sense widely?
Recently, we worked with an organisation wanting to improve their call centre operations. When asked "what are we here to do?", the answer was: "we want to build a gadget to automatically answer the customer queries". This is an example of starting with the solution without understanding the purpose.
Let's start again: what is the purpose, from the customer's perspective?
The customer says: "I want resolution on the first contact".
This sounds like a good enough purpose, but hang on, we shouldn't stop here. Why would the customer call us in the first place?
- It is my preferred channel to get the service I want
- Call centre is my preferred channel but my query was not resolved when I called before
- Call centre is not my preferred channel, I tried another channel first and I couldn't get the service and now I am here
Points 2 & 3 are Failure Demand: demand that our current system of work is generating because we have failed to serve the customer on their first contact through their chosen channel.
When we study the system and understand the demand further, we get the following, even more insightful, statistics:
15% choose call centre as their channel of choice
of which 8% need to call back as their query was not resolved on the first call
85% are calling us because we have not resolved their query on the first contact through other channels
What should we do: process failure demand faster, or redesign our system of work to stop this amount of failure demand from being generated?
Cost saving initiatives that look at part of the value stream will lead to increased costs because the value is in the whole and not part of the system.
"The Kanban Method will help you visualise your system of work, identify the constraint and show you how to buffer your value flow from the constraint and the variations in demand."
In a connected system it is usual to have improvements that address a part rather than the whole, because the whole system is typically too large. The problem arises when we rush into solution delivery without first studying and understanding the problem. When we understand the overall system, we can start with solutions that address a part while having system-wide measures to ensure the rest of the system is not adversely affected. Those measures let us know why the increased investment is not yet yielding results, stop us introducing unnecessary change, and help us follow through to the other parts of the connected system to make further changes.
Study the system, understand the problems (sources of dissatisfaction) and then see how the work redesign could lead to significant uplift in capability without additional resources (people or technology).
What techniques can you use to do this?
Visualising your work as a kanban system and tackling the sources of dissatisfaction, read more here
Apply the Theory of Constraints' five focusing steps:
Step 0 - Understand the system goal: this is what we are optimising for.
Step 1 - Identify the constraint: what is the weakest link in the system of work? Improvement efforts that do not target the constraint will increase costs without any increase in system capability.
Step 2 - Exploit the constraint: any time the constraint is waiting for work, throughput is reduced.
Step 3 - Subordinate everything else to the constraint: by definition the rest of the system can produce more than the constraint can handle, so we must limit work in progress to stay in step with the constraint. Look at the system design and see how changes to the work design could offload as much work as possible from the constraint and increase system throughput.
Step 4 - Elevate the constraint: once we have completed the above steps and increased throughput, if we need still more throughput then consider elevating the constraint, meaning adding more people, tools, machines...
Step 5 - Just because one constraint is removed doesn't mean the system is free of constraints: go back to Step 1.
Do you want to learn how to identify your system constraint in knowledge work? The Kanban Method will help you visualise your system of work, identify the constraint and buffer your value flow from the constraint and the variations in flow.
Kanban Management Professionals are experts in visualizing and improving the system of work.
|
OPCFW_CODE
|
Many thanks to Robert P. Smith for doing the original English version of this FAQ!
What is Luding?
Luding is a game database that contains several thousand games, designers and publishers. There are also links to discussion of games at more than 60 sites around the WWW.
Who is involved with Luding?
Luding was created by Mario Boller-Olfert (see The development of Luding.) Christian Scholz, a computer science student, contributed the graphics and also the first version of the scripts. There are many other people who have also helped out, including Bert and Dotty Hess, Knut Michael Wolf, Richard Heli, and others. Thanks to all!
Also some (unfortunately not all!) publishers and designers send us information about their new games. Luding depends on such information - without it Luding cannot exist!
Why is Luding free from advertising?
Luding is a non-commercial site. It is hosted on Sun SITE Central Europe, a Sun Microsystems sponsored computer at the computer science department of the Rheinisch-Westfälische Technische Hochschule Aachen. No money is earned with Luding.
What information can I find on Luding?
Luding contains information on board and card games, role playing games, and war games. The designer, publisher, year of publication, price, series, number of players, game length, as well as whether the game is appropriate for children or whether it is a collectible card game are listed. For war games there are fields for the setting, the game playing system to which the entry belongs, and whether it is a stand-alone module, rule book or source book. There are also links to discussion of the games on the WWW. These links use a compact notation (for example "DK" for Knut Michael Wolf's Spielplatz or "EFAQ" for the English-language FAQ). The first letter shows the language - D for German, E for English. P is for pictures of the game. The following letters give the source of the discussion (see our referring sites.) In some cases the abbreviation indicates the type of discussion instead of a concrete source:
A link on a site which is not a review has the type appended to its linkname, e.g. EZ-FAQ for an FAQ for a game on the site EZ.
Links are checked regularly, and links to reviews which do not work anymore are displayed in italics. If you have information about any of these links, please contact luding.
Luding also contains information on publishing houses (games published as well as address data) and designers (published games, links to a home page, aliases, and a biography if available).
How do I find information or discussion about a game?
If you are looking for information or discussion about the game Bohnanza, you can find it by entering Bohnanza in the "Find game" field at http://sunsite.informatik.rwth-aachen.de/luding, and pressing Return. The search function returns all games in which the word Bohnanza occurs. The result includes the publisher, year of publication, series as well as links to discussions about the game.
Instead of searching from the Luding homepage, you can also select "english with frames" or "english without frames" on the homepage, and enter Bohnanza in the "Find game" field in the left menu, then press Return. Third, you can click on "Find" in the same left menu, which takes you to a more advanced search.
It is possible to input a % for any sequence of characters, for example with "man%l" you will find the games "Mancala", and "Thinking Man's Golf", although not "Manhattan".
In order to limit the search you can select the button "Title begins with string" - which can also be combined with %. Only those games (designers, ...) beginning with the appropriate character sequence are found (naturally it makes no sense to select "Title begins with string" and then begin the search with % as the first character).
The newest possibility is the so-called "Fuzzy search". With fuzzy search, special characters (like foreign characters or letters with accent marks) are found. For example, when this button is selected, "argern" will find "Igel Ärgern". This function can be combined with % and "Title begins with string".
On the search page, which you can reach by clicking on "Search" in the left menu, there is a button for "Fuzzy search". When this is selected, the name of the game, the designer, and the publisher are found as in the previously described fuzzy searches.
I am missing the rules to a game - where can I find them?
The first possibility, particularly for games that are still available: write to the publisher. If the publisher no longer exists or the game is no longer available then a post to the newsgroup rec.games.board might be helpful. Other sources for rules in English include http://www.sacredchao.cc/, http://www.boardgamegeek.com and (for older games in particular) http://www.gamecabinet.com/. Also, take a look at the Austrian site http://www.spielen.at. They have an email rules service, and their game collection contains about 12,000 games! You can contact them at firstname.lastname@example.org.
I have a question about the rules of a game - where can I get an answer?
Questions about the rules of a game can't usually be answered. You can either try contacting the publisher or the designer, or post these questions in the newsgroup rec.games.board, in the German newsgroup de.rec.spiele.brett+karten, or in the Spielbox's (German) forum at http://spielbox.de/phorum4.
I am looking for the game XYZ ... where can I find it?
Games that are still available (including imports) can often be found through web retailers (for example Funagain Games or Boulder Games.) You can also find a list of German retailers at KMW's Spielplatz.
Sometimes games that are no longer available can be found through flea markets, classified ads in games magazines, or online auctions (such as EBay.) A further source is the newsgroup rec.games.board.marketplace.
There are also some stores that specialize in finding rare games, such as Crazy Egor's.
The best source of new, used, domestic and imported games is probably the gaming convention. There are many conventions around the country and the world, although the biggest convention in the world is the annual convention held in Essen, Germany every October.
I discuss board games on my web site. How can I link my reviews at Luding?
If you want to link your reviews at Luding, you must be registered. Before you can be recorded as a reviewer, the following points should be met:
Questions about the structure of Luding
What kinds of games are there in Luding?
There are the types board game, war game, role playing game as well as book and magazine. Board games are also split into the following subtypes: CCG (collectible card games) and children's games.
Why don't you differentiate between board and card games?
For many games this distinction is not a problem, but there are also many borderline cases: games such as Showmanager (Premiere), Stimmt so! and Modern Art that are - despite the attached board - packs of cards. On the other hand there are Bohnanza and Verräter - where there is no game board, but the "feel" of the game is more similar to board games. In order to avoid such confusion, no distinction is made between board and card games.
Background information on Luding
Please send comments and questions to: luding< at > luding < dot > org
|
OPCFW_CODE
|
Place Recognition (PR) enables the estimation of a globally consistent map and trajectory by providing non-local constraints in Simultaneous Localisation and Mapping (SLAM). This paper presents Locus, a novel place recognition method using 3D LiDAR point clouds in large-scale environments. We propose a novel method for extracting and encoding topological and temporal information related to components in a scene and demonstrate how the inclusion of this auxiliary information in place description leads to more robust and discriminative scene representations. Second-order pooling along with a non-linear transform is used to aggregate these multi-level features to generate a fixed-length global descriptor, which is invariant to the permutation of input features. The proposed method outperforms state-of-the-art methods on the KITTI dataset. Furthermore, Locus is demonstrated to be robust across several challenging situations such as occlusions and viewpoint changes.
Visual place recognition is challenging because there are so many factors that can cause the appearance of a place to change, from day-night cycles to seasonal change to atmospheric conditions. In recent years a large range of approaches have been developed to address this challenge including deep-learnt image descriptors, domain translation, and sequential filtering, all with shortcomings including generality and velocity-sensitivity. In this paper we propose a novel descriptor derived from tracking changes in any learned global descriptor over time, dubbed Delta Descriptors. Delta Descriptors mitigate the offsets induced in the original descriptor matching space in an unsupervised manner by considering temporal differences across places observed along a route. Like all other approaches, Delta Descriptors have a shortcoming - volatility on a frame to frame basis - which can be overcome by combining them with sequential filtering methods. Using two benchmark datasets, we first demonstrate the high performance of Delta Descriptors in isolation, before showing new state-of-the-art performance when combined with sequence-based matching. We also present results demonstrating the approach working with a second different underlying descriptor type, and two other beneficial properties of Delta Descriptors in comparison to existing techniques: their increased inherent robustness to variations in camera motion and a reduced rate of performance degradation as dimensional reduction is applied. Source code will be released upon publication.
* 8 pages and 7 figures. To be published in 2020 IEEE Robotics and Automation Letters (RA-L)
Generalised zero-shot learning (GZSL) is a classification problem where the learning stage relies on a set of seen visual classes and the inference stage aims to identify both the seen visual classes and a new set of unseen visual classes. Critically, both the learning and inference stages can leverage a semantic representation that is available for the seen and unseen classes. Most state-of-the-art GZSL approaches rely on a mapping between latent visual and semantic spaces without considering if a particular sample belongs to the set of seen or unseen classes. In this paper, we propose a novel GZSL method that learns a joint latent representation that combines both visual and semantic information. This mitigates the need for learning a mapping between the two spaces. Our method also introduces a domain classification that estimates whether a sample belongs to a seen or an unseen class. Our classifier then combines a class discriminator with this domain classifier with the goal of reducing the natural bias that GZSL approaches have toward the seen classes. Experiments show that our method achieves state-of-the-art results in terms of harmonic mean, the area under the seen and unseen curve and unseen classification accuracy on public GZSL benchmark data sets. Our code will be available upon acceptance of this paper.
Generalised zero-shot learning (GZSL) methods aim to classify previously seen and unseen visual classes by leveraging the semantic information of those classes. In the context of GZSL, semantic information is non-visual data such as a text description of both seen and unseen classes. Previous GZSL methods have utilised transformations between visual and semantic embedding spaces, as well as the learning of joint spaces that include both visual and semantic information. In either case, classification is then performed on a single learned space. We argue that each embedding space contains complementary information for the GZSL problem. By using just a visual, semantic or joint space some of this information will invariably be lost. In this paper, we demonstrate the advantages of our new GZSL method that combines the classification of visual, semantic and joint spaces. Most importantly, this ensembling allows for more information from the source domains to be seen during classification. An additional contribution of our work is the application of a calibration procedure for each classifier in the ensemble. This calibration mitigates the problem of model selection when combining the classifiers. Lastly, our proposed method achieves state-of-the-art results on the CUB, AWA1 and AWA2 benchmark data sets and provides competitive performance on the SUN data set.
We present a Gaussian kernel loss function and training algorithm for convolutional neural networks that can be directly applied to both distance metric learning and image classification problems. Our method treats all training features from a deep neural network as Gaussian kernel centres and computes loss by summing the influence of a feature's nearby centres in the feature embedding space. Our approach is made scalable by treating it as an approximate nearest neighbour search problem. We show how to make end-to-end learning feasible, resulting in a well formed embedding space, in which semantically related instances are likely to be located near one another, regardless of whether or not the network was trained on those classes. Our approach outperforms state-of-the-art deep metric learning approaches on embedding learning challenges, as well as conventional softmax classification on several datasets.
* Accepted in the International Conference on Image Processing (ICIP) 2018. Formerly titled Nearest Neighbour Radial Basis Function Solvers for Deep Neural Networks
To solve deep metric learning problems and producing feature embeddings, current methodologies will commonly use a triplet model to minimise the relative distance between samples from the same class and maximise the relative distance between samples from different classes. Though successful, the training convergence of this triplet model can be compromised by the fact that the vast majority of the training samples will produce gradients with magnitudes that are close to zero. This issue has motivated the development of methods that explore the global structure of the embedding and other methods that explore hard negative/positive mining. The effectiveness of such mining methods is often associated with intractable computational requirements. In this paper, we propose a novel deep metric learning method that combines the triplet model and the global structure of the embedding space. We rely on a smart mining procedure that produces effective training samples for a low computational cost. In addition, we propose an adaptive controller that automatically adjusts the smart mining hyper-parameters and speeds up the convergence of the training process. We show empirically that our proposed method allows for fast and more accurate training of triplet ConvNets than other competing mining methods. Additionally, we show that our method achieves new state-of-the-art embedding results for CUB-200-2011 and Cars196 datasets.
* Vijay Kumar B G and Ben Harwood contributed equally to this work. Accepted in IEEE International Conference on Computer Vision, ICCV 2017
|
OPCFW_CODE
|
400 Bad Request
So I'm using CORS and HTTPS to run the server (port 2567) while I run the game on a different server (port 3000).
While the logs show I'm able to create a room
Process 70: Get avalible rooms
Find process for room:
Current process load: {}
All cluster pids []
Process 70 requested: Create room pullowar
The client side seems to return a
Request URL: https://localhost:2567/magx/rooms
Request Method: POST
Status Code: 400 Bad Request
Remote Address: [2803:1500:e00:f22f:ba27:ebff:fede:91d0]:2567
Referrer Policy: strict-origin-when-cross-origin
with the following payload.
{"name":"pullowar","options":{}}
Did I need to set additional options for the room to be initialized?
I notice that the authentication returns a token. Was I supposed to use the token when trying to create or join the room?
A token is required for this API method. All available methods are in the api.ts file.
I assume you are getting the 400 error because the room with that name is not defined.
The easiest way to start using a magx server is via the magx-client library; otherwise you will have to implement all connection/communication logic yourself.
@udamir
Well, based on your example I seem to be instantiating it correctly. The only difference is I had to use "https" in order for the monitor to work.
const createServer = require('https').createServer,
fs = require('fs'),
production = process.env.NODE_ENV || false,
port = process.env.PORT || 2567,
Server = require('magx').Server,
server = createServer({
key: fs.readFileSync(__dirname + '/../private.key'),
cert: fs.readFileSync(__dirname + '/../private.crt')
}),
gameServer = new Server(server),
PullOWarRoom = require('./rooms'),
monitor = require('magx-monitor').monitor;
gameServer.define('pullowar', PullOWarRoom);
.....
And I am using the client correctly. Do I need to set the token manually? I see it being passed in the headers.
client.authenticate().then((r) => {
console.log('auth', r)
return client.getRooms()
}).then((rooms) => {
console.log('rooms', rooms); // does
return (ROOM_ID ? client.joinRoom(ROOM_ID) : client.createRoom('pullowar'))
} ).then(room => {
Room = room;
setGameId( room.id)
...
}).catch(e => {
console.log("JOIN ERROR", e); // prints 400 bad request
})
The game says it's creating the room
Process 70: Get avalible rooms
Find process for room:
Current process load: {}
All cluster pids []
Process 70 requested: Create room pullowar
However the monitor does not list it.
If I don't use CORS or HTTPS I get the other issues I previously logged in the repo.
you need to call monitor(gameServer) to attach monitor to magx server.
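For example, a minimal sketch reusing the gameServer from the snippet above:
// Attach the monitoring UI to the existing magx server instance,
// after the rooms have been defined.
monitor(gameServer);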
Monitor is a very poor module, but it works with http as well as with https.
I tested the magx server with https also, and didn't have any issues.
If the client doesn't join the room after creation, it is closed automatically.
|
GITHUB_ARCHIVE
|
Nanomancer Reborn - I've Become A Snow Girl? - Chapter 1039: Sanctuary Of Life
Just as she asked this, they watched as a tree suddenly appeared, towering over everything else in its vicinity. Seeing the crest above the tree, Misu narrowed her eyes since she couldn't tell which Queen this crest belonged to.
Talking to the group about her new weapons a little longer, they watched as the crest above the area began to shift.
"What's the matter now?" Misu required as she sat with a cliff edge all over the ocean from Vrish' Lir.
"When you can do this, so why do you insist upon three mega turrets?" Silvia inquired as s.h.i.+ro only chuckled.
"What? I'm not planning all out since i have don't intend to make my a.r.s.enal of weaponry too noticeable. At the beginning I wanted to position an absolute of 8 super turrets, four obstacle generators, a mech hanger below land surface and open up tunnels to every part of the land, several development generators adjacent to every one of the barrier generators to ensure they're guarded in addition to a weapon transporter to make sure that I will teleport the cannon on Asharia next to the base." s.h.i.+ro pouted because they decreased quiet.
Pa.s.sive health insurance and mana regen +60%
"As if they can survive your initial bombardment!" Nimue rolled her eye as s.h.i.+ro shrugged.
The zones near her will still be during this process of being taken so she didn't know which Queen they are part of. Irrespective, she had to maintain her defend up simply because they could have a peculiar capability like the individual that acquired ripped off her detects. Had it not been on her behalf divine vitality and divinity, there is no revealing to what can have happened.
"If you don't intellect me questioning, what's a super turret? The reason there are mega in its label?" Nimue required curiously as s.h.i.+ro grinned.
"What have I assume." Yin muttered as her grin twitched.
"An individual mega turret will fireplace 36 homing rounds that will separate to 216 projectiles. Each one are similar to an attack from me and here's the great thing." s.h.i.+ro snickered.
HP and Mana Siphon +30%
"What? I'm not planning all out since I don't need to make my a.r.s.enal of weaponry too obvious. In the beginning I needed to place an absolute of 8 super turrets, three boundary generators, a mech hanger below terrain and create tunnels to each and every element of the country, four creation generators close to every single buffer generators to ensure that they're protected plus a weapon transporter to make sure that I could teleport the cannon on Asharia next to the starting point." s.h.i.+ro pouted as they quite simply fell calm.
"Need to have a time out?" s.h.i.+ro expected as she was much more than experienced with the phrase on Nimue's face.
"You are not weaponizing a pseudo entire world shrub." Nimue facial area palmed.
All allies in this particular region are awarded the subsequent outcomes.
"Today the Dragon Empress has occupied the money, the Super Princess has shot one of the small zones nearby the benefit and several reduced graded Queens are combating on the more substantial zones."
Waving her hand, a screen projected itself in front of everyone, where they could see roots rising out of the ground.
[Vandiline – Sanctuary of Life]
Plus, the craziest of all: you cannot be slain in a single attack. This means that even if they fire something so strong it could burst through the barrier, they won't be able to kill you, since you'll be protected by the passive.
While Iziuel was integrating herself as the lifeline, she had sent out a drone to scan the vicinity so that she could determine proper defences. Since the barrier can be destroyed with sufficient force, it's possible for someone to simply launch an attack on the building, as the lifeline isn't allowed to leave the spot.
"Hmm, they're a bit too hasty. Any news on s.h.i.+ro? The service provider claimed that she'll have this event."
"As if they can thrive your initial bombardment!" Nimue rolled her eye as s.h.i.+ro shrugged.
All allies in this zone are given the following effects.
Throughout combat in this particular area, Allies are granted 3Percent wellness regen per secondly.
"When you can achieve that, exactly why do you insist on a number of mega turrets?" Silvia requested as s.h.i.+ro only chuckled.
"Inquisitive. Send several scouts and check out that region for me."
"Hmm… the amount of weaponry should we add more?" s.h.i.+ro expected as she tilted her travel back and glanced towards everybody who has been status behind her.
|
OPCFW_CODE
|
Validating a phone number with regex
Consider a flat-file source Employee which contains employee details, including the Phone Number. Using regular expressions, validate the Phone Number of every employee record according to US standards and load the valid records into the target table. The records with invalid Phone Numbers must be handled appropriately. Specifications: PowerCenter version 9.
Overview: Regular expressions provide the foundation for describing or matching data according to defined syntax rules. Using regular expressions, it is simple to search for a specific word or string of characters; almost every editor on every computer system can do this. You can also search for words of a certain size, for numbers, punctuation characters, patterns, and so on. At the same time, regular expressions can be very confusing and tricky, and the learning curve may not be easy for everyone. Often this results in underutilization of regular expressions, and their true power is not harnessed.
There are two places to validate data: on the client and on the server. Client-side validation may work for users who are following the proper means to enter data, but malicious hackers may send POST data through non-traditional means, not a browser. In fact, you cannot trust that there is a browser at all. In both situations, the data that the application gathers is unreliable at best, so performing validation on the server-side adds an extra layer of security to your application.
So, what is a programmer to do? I simply cannot trust a person, or a text file of data that was entered by a person, to enter data in the correct format that I require. I have no control over the phone number formats in the file, but I want to read the phone numbers, validate them against my regular expression, and break them up into their individual components: area code, exchange, number, and extension. The area code should not start with 0, and may be optionally enclosed in round brackets. If we only wanted to restrict input, we could create three or four form fields, one for each element, and restrict the length of those fields, but a regular expression is a more powerful tool, intended to validate phone numbers from sources that are well beyond our control.
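A sketch of such an expression in JavaScript (not the article's exact pattern; the rules here - optional round brackets, no leading 0 in the area code, optional extension - are an illustration):
// Capture groups: area code, exchange, number, optional extension.
var phoneRe = /^\(?([1-9]\d{2})\)?[-. ]?(\d{3})[-. ]?(\d{4})(?:\s*(?:x|ext\.?)\s*(\d{1,5}))?$/;
var m = '(555) 123-4567 x89'.match(phoneRe);
if (m) {
  // m[1] = '555', m[2] = '123', m[3] = '4567', m[4] = '89'
  console.log(m[1], m[2], m[3], m[4]);
}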
|
OPCFW_CODE
|
The backlog configuration parameter sets a queue length for incoming connection
indications (a request to connect) for a server socket. If a connection indication arrives when the queue is full,
the connection is refused.
Depending on how your OS is configured, you might still hit a limit at 128 or so. This is probably due to the kernel's somaxconn parameter, which has a default value lower than the one you specified for the backlog. So try setting it to the same value:
$ sysctl -w kern.ipc.somaxconn=1024   # macOS / BSD
$ sysctl -w net.core.somaxconn=1024   # Linux
The ioThreadCount configuration parameter sets the maximum number of threads handling asynchronous IO operations in client connections. By default, it is set to twice the number of processors/cores available, which should be enough for normal operation modes. However, there can be an issue with slow clients fetching big file chunks simultaneously, which in conjunction with slow disk read speed may cause other connections to lag. Try increasing the value of this configuration parameter to some sensible value.
Write spin count¶
The writeSpinCount configuration parameter controls channel data writing behaviour when attempting to send a buffer's data to the underlying socket. The issue is that a write from buffers to the underlying socket may not transfer all data in one try, and the parameter sets the maximum loop count for a write operation until the channel's write() method writes some amount. There is a balance between how much time the IO thread can spend attempting to fully write a single buffer to the socket, versus serving other sockets instead and coming back when success is guaranteed. If the buffer is not fully written, then the IO thread must register for the writability change event, to be notified when the underlying socket becomes writable. If there are many parallel channels served by a restricted number of IO threads, it may make more sense to let the IO thread switch more frequently from a temporarily non-writable channel to one which is available for writing.
The handlersThreadCount configuration parameter sets the maximum number of threads handling API request processing logic. Some operations may require more time to complete than others. For instance, listing a directory with many subdirectories and files in it may take significantly more time than acquiring the properties of a certain file. Like a thread from the IO pool executing one IO task at a time, a request handling thread runs one operation at a time. Thus ideally, the value of handlersThreadCount must be at least the maximum number of concurrent API requests that need to be served simultaneously.
File descriptors are operating system resources used to represent connections and open files, among other things. Should you have queries resulting in simultaneous locking of too many files, or should the server manage a large number of connections, try increasing the FD ulimits in the shell for the server application (ulimit -n number).
Since the Linux kernel INotify API does not support recursive listening on a directory, SourceAgent adds an inotify watch on every subdirectory under the watched directory. This process takes time that is linear in the number of directories in the tree being recursively watched, and requires system resources, namely INotify watches, which are limited (by default to 8192 watches per process).
If you observe a "No space left on device" error in the logs, it may mean the native inotify watch limit per process has been reached. The solution is to increase the limits by editing the respective files in
When running on Linux, Source Agent utilizes the kernel INotify API for getting file system modification events. However, INotify does not work with directories on NFS shares. In case you have such directories defined with roots.<container>.path directives, you need to force Source Agent to use directory polling for these by specifying values for the roots.<container>.polling.* configuration parameters for each such directory.
|
OPCFW_CODE
|
PROBLEM: USB isochronous urb leak on EHCI driver
stern at rowland.harvard.edu
Tue Jan 6 03:00:06 AEDT 2015
On Mon, 5 Jan 2015, Michael Tessier wrote:
> > > Hi,
> > >
> > > I am dealing with a USB EHCI driver bug. Here is the info:
> > >
> > > My configuration:
> > > -----------------
> > >
> > > Host: Freescale i.MX512 with ARM Cortex A8 (USB 2.0 host controller)
> > > Linux kernel: 2.6.31, using EHCI USB driver
> > As mentioned by other people, the age of that kernel makes any bug report completely irrelevant. It's hard to count the number of non-trivial changes that have been made to the isochronous code in ehci-hcd since 2.6.31, but there have been quite a few.
> > > Hub: 4-PORT USB 1.1 HUB (Texas Instruments PN: tusb2046b)
> > > Devices: 4 USB 1.1 audio codecs (Texas Instruments PN: pcm2901)
> > >
> > > Note: each codec is being used in R/W access, so with 4 codecs, I have
> > > 4 playback and 4 capture streams.
> > >
> > > My problem:
> > > -----------
> > >
> > > I have usb urb leaks when connecting more than 1 codec to the USB 1.1
> > > Hub.
> > What do you mean by "urb leak"? Normally, people use the word "leak"
> > to refer to memory that is dynamically allocated and never deallocated, but you seem to mean something else.
> You are right. What I mean by leak is the following: At application level,
> all my calls to "Read" or "Write" operation to the codec driver will return
> with the correct amount of bytes read/written, with a "choppy" sound. Then
> when looking at lower levels:
> snd_pcm_oss_write (pcm_oss.c) -> OK
> snd_pcm_lib_write (pcm_lib.c) -> OK
> usb_submit_urb (urb.c) -> FAIL with 3 codecs
> The "FAIL" here indicates that the total amount of bytes transferred does
> not correspond to what was expected. And indeed the sound is "choppy" when
> using more than a certain amount of bandwidth. However this amount of
> bandwidth is higher when connecting only 1 codec with different settings
> (48khz-stereo 16-bits instead of 32 khz-mono 16-bits). So at some point it
> looks like the bug is in the scheduler, only with several isochronous links.
> > The amount of bandwidth available is usually not as much of an issue as the ability of the scheduling algorithm to divide the bandwidth among the streams. The
> > algorithm is not very smart and it often runs into a wall even when lots of physical bandwidth is still available.
> That is interesting, however, I have an older kernel running an OHCI
> driver which is able to handle 4 codecs. Same usb hardware (codecs and
> hub), but older kernel on a different CPU, with much less power. This makes
> me believe that there's a solution to make it work...
Of course there is: Install an OHCI host controller and use it to drive
your codecs. It should work fine.
The periodic scheduling algorithm for OHCI is very different from the
algorithm for EHCI.
> > How does your hardware connect the host controller to a full-speed device? Is there an internal hub (Intel motherboards have used this approach)? Is there a
> > companion USB-1.1 controller (older motherboards from Intel and other companies have used this approach)? Does the EHCI controller have a built-in Transaction
> > Translator (some SOC systems use this approach)?
> The CPU is a Freescale i.MX512, with 3 USB 2.0 Host controllers. My hub
> is connected to the main CPU board with a standard USB cable, so it's easy
> to swap my 4-port hub from a USB 1.1 to a USB 2.0. My codecs are always
> the same: USB 1.1 Texas Instruments PN# pcm2901. I don't believe there's
> a built-in Transaction Translator. How can I check that?
You can tell by seeing what shows up in the "lsusb -t" output when you
plug in the USB-1.1 hub. If the hub's parent is the EHCI controller
then there must be a built-in TT.
Also, if you enable CONFIG_USB_DEBUG in your kernel then the dmesg log
for boot-up should say whether or not the controller has a built-in TT.
> > > Question:
> > > ---------
> > >
> > > Before attempting to upgrade to an earlier kernel driver (this is
> > "upgrade to an earlier kernel driver" is a contradiction in terms.
> > Moving to an earlier driver would be a _downgrade_.
> Sorry, I meant to say "newer"...
> > > a fairly big amount of work), I would really like to know if this
> > > problem would still be in the 3.x kernels. Has anyone seen that issue
> > > in 3.x kernels?
> > It depends a lot on the system hardware. Many people are using USB audio in 3.x kernels with no problem. On the other hand, some people have reported a bug
> > (quite different from yours) so recently that the patch to fix it has not yet been merged.
> I understand. However, if one could test the following with a 3.x kernel:
> - CPU with USB 2.0 Host controller (using EHCI-hcd driver)
> - 4-port USB 1.1 Hub
> - 4x USB codecs (configured at 32khz-mono, 16-bits audio)
> Then try to stream audio on each of the 4 codecs at the same time (this
> includes one Read and one Write stream on each codec, so total of 4 "Read"
> and 4 "Write" streams. Then listen to the output...
The result is likely to depend on what other USB hardware is attached.
> If sound is ok when using only 1 codec and becomes choppy when adding a
> second codec, then it means that this issue is still in the 3.x kernel. This
> answer will tell me if it is worth working on using a newer kernel or not.
> I have to say that I'm not a linux expert, so I see the migration to a newer
> kernel as a quite big amount of work...
Why don't you try this yourself? It's easy to do; borrow a regular PC
with a USB-2 host controller, boot it from a Live-CD version of Linux,
plug in your hub with the codecs, and see what happens.
|
OPCFW_CODE
|
Website Alignment - is it just looking good or anything else
Now most of the major websites are centre aligned. Example Google, Yahoo, Rediff, CNN, BBC etc. Some old websites are still left aligned. What is the role of the website alignment? Is it just about looking good? If we stretch the website to 100% what is wrong with that? It will work for all resolutions. Is it? Expecting good feedback from web designers.
Please suggest some websites for learning designing web layouts.
I just visited www.cnn.com at 1280 x 1024 and it's centre aligned.
When I change to 1024 x 768 it's wide, with no centre alignment. I tried Yahoo too. How is this happening? I need to learn the basics of web layout design for different resolutions!
It looks good, it works well in many resolutions and it's easy to style, structure and code. Lots of qualifiers on those statements, but those are the reasons you'll hear.
However there are several ways to achieve this, and some adapt to different resolutions well, some don't. What you're talking about on CNN.com is an elastic design, and the problem you're seeing is that it is thus in part a fixed-width design, so that if the window shrinks too much there is no reorganizing or restyling of elements.
To be technical briefly, the effect you see is because their CSS is using margin:auto on their container which automatically adds margins to the left and right of your content to keep it centered, but they are using a fixed width for their container element so that if the screen shrinks beyond the automatic margin the content will remain the same width, forcing you to scroll to see it all.
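In CSS terms the pattern boils down to something like this (a minimal sketch; the class name and width are illustrative, not CNN's actual values):
.container {
  width: 960px;    /* fixed width: the content cannot shrink below this */
  margin: 0 auto;  /* automatic left/right margins keep the block centered */
}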
The reason you want to keep things centered is that if your design is too small for a user's screen size, it might look ridiculous crammed into the left (or right) side of the screen. CNN's solution gracefully fixes this problem by keeping an automatic margin on both sides of their content to keep it centered. The flaw, as you've noted, is that it does not shrink gracefully because of the fixed width of the content.
The "in" solution is either a Fluid design, as you can see in this example. Ideally your Smashing Magazine provides some background and suggestions on Fluid vs Fixed vs Elastic design.
A problem with their examples is they fall prey to the same problem as CNN.com, and they don't remove or reorder elements to suit small screens. When it comes to supporting mobile users, some elements like that giant side bar might just have to go; this is responsive design. It's a lot more complicated, but it allows you to design a single site for both mobile and desktop browsers.
For a simple example of a responsive design, see this related article from 456breakstreet.com. Resize the page until there is no remaining room for the right column. It disappears! It's actually been moved to the bottom of the page, however if you feel a sidebar or such is no longer needed, you can simply hide it when screen space is too tight.
Boston Globe's website does this as well, a bit... zealously, to say the least. A problem with Boston Globe's solution is that the font and white space change so drastically between sizes; however this isn't as noticeable when the browser stays at a single width or changes rarely.
This type of trick allows you to design a site with as many sidebars, footers, headers, leggers and armers as you want, and you can strategically dismember those elements that are not absolutely necessary as the screen size shrinks. On mobile your user probably just wants to view only the content, why waste pixels they don't have on a sidebar?
|
STACK_EXCHANGE
|
Set up Static Site Generation in Next.js in 5 minutes
Over the past year, Next.js has been gaining a lot of traction around static site generation, since version 9.3 implemented this in its core. This is why I wanted to write a blog post containing all the information to get you started with SSG/ISG (Incremental Static Generation) in Next.js.
Why would you use this? Mostly for performance reasons: when you already have the HTML generated at build time, you can cache this file and serve it very quickly to the user requesting it. SSG/ISG will most probably help you get a better ranking on Google too; see https://9to5google.com/2020/05/28/google-search-speed/.
How to statically generate pages in Next.js
When you don’t fetch data on your page, the default behaviour is that the page gets statically prerendered. Next.js will generate an HTML file for your page, and you can host this on any server.
When you do want to fetch data from an external source, but still want to statically prerender your pages, this is also possible. There are 2 possible cases here:
Define your own pages/URLs
In this case, you can create your page under the pages/ directory, for example pages/blog.js. Add the getStaticProps function to your page and export it.
In this function, you can call any external data source to fetch data for your page.
Since this is all done on the server during build time, you can even access a database directly if you wanted to.
Next.js does not limit the external data sources, so you can use a REST API, JSON API, GraphQL API… You can find a repository with a ton of examples here: https://github.com/vercel/next.js/tree/canary/examples
An example from the documentation:
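A minimal sketch of such a getStaticProps (the endpoint URL is a placeholder):
// Runs on the server at build time; the returned props are
// passed to the page component.
export async function getStaticProps() {
  const res = await fetch('https://example.com/api/posts')
  const posts = await res.json()

  return { props: { posts } }
}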
Pages/URLs coming from external source
In this case, you will need to create a page with a dynamic route. Again there are 2 options for your dynamic routes:
- You can create a dynamic route where only 1 part of your URL is dynamic, for example pages/[id].js, where the ID will be replaced with the ID coming from your external source
- You can create a dynamic catch-all route where the whole URL is dynamic, for example pages/[...slug].js, where ...slug could be blog/nature/hike1 in your URL and comes from your external data source
Now how do you actually fetch the data to form the actual URLs for your pages inside your component?
This is where the getStaticPaths function comes in. This is also an exported function.
An example for a “simple” dynamic route with 1 part of the URL being dynamic:
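A minimal sketch for pages/[id].js (placeholder endpoint; params.id must match the [id] segment of the file name):
export async function getStaticPaths() {
  const res = await fetch('https://example.com/api/posts')
  const posts = await res.json()

  return {
    // One entry per page to prerender at build time.
    paths: posts.map((post) => ({ params: { id: post.id.toString() } })),
    fallback: false, // any other URL returns a 404
  }
}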
An example for a more complex dynamic route where the whole URL is coming from your external source:
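A minimal sketch for pages/[...slug].js (placeholder endpoint; assumes each post has a url field like 'blog/nature/hike1'):
export async function getStaticPaths() {
  const res = await fetch('https://example.com/api/posts')
  const posts = await res.json()

  return {
    // For a catch-all route, params.slug is an array of URL segments.
    paths: posts.map((post) => ({ params: { slug: post.url.split('/') } })),
    fallback: false,
  }
}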
By adding this code, a page will be generated for every blog post we created in our external source at build time. So we'll have /blog/nature/hike1, /blog/nature/hike2, etc. available to visit.
With fallback: false set in the returned object, we are telling Next.js to return a 404 for every page requested that was not generated at build time.
When you add a new blog post after you've built your application, for example /blog/nature/beachtrip, and want it to be picked up by Next.js, you should use fallback: true or fallback: 'blocking'. Next.js will then fetch the URLs from your external source again, and will create the page for your visitor.
fallback: true will show a loader or other placeholder component until the data is available.
fallback: 'blocking' will do server-side rendering of the page for the first request, making the browser wait until the server has rendered the page, and then serve the static prerendered version for subsequent requests.
More info on the fallback property can be found here: https://nextjs.org/docs/basic-features/data-fetching#the-fallback-key-required
The getStaticPaths function should always be combined with the getStaticProps function, because you'll want to fetch the data for the specific item you want to render.
So in the same file, we could now add this:
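A sketch for the [id] variant (placeholder endpoint):
export async function getStaticProps({ params }) {
  // params contains the values produced by getStaticPaths for this page.
  const res = await fetch(`https://example.com/api/posts/${params.id}`)
  const post = await res.json()

  return { props: { post } }
}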
! When using the [...slug] dynamic route, the slug comes in as an array of strings, one array element for each part of the URL, so /blog/nature/hike => ['blog', 'nature', 'hike']. Minimal example below !
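A minimal sketch for the catch-all case (placeholder endpoint):
export async function getStaticProps({ params }) {
  // Join the segments back into the path your data source expects.
  const slug = params.slug.join('/') // ['blog', 'nature', 'hike'] => 'blog/nature/hike'
  const res = await fetch(`https://example.com/api/posts/${slug}`)
  const post = await res.json()

  return { props: { post } }
}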
Incremental static generation
But what if the data you are using is dynamic too? Your blog post gets updated on your external data source, but at the moment our component will only be statically generated once at build time, and not regenerated when the blog data changes (for a new blog post, this will be picked up by Next.js as explained above).
For this, Next.js added the revalidate property, which can be added to the object you return from your getStaticProps function.
You pass a number as the value of this property, corresponding to the minimum number of seconds after which you want Next.js to regenerate the page.
The page will only be regenerated when a request for this page comes in.
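A sketch of what that looks like (placeholder endpoint; 60 is an arbitrary interval):
export async function getStaticProps() {
  const res = await fetch('https://example.com/api/posts')
  const posts = await res.json()

  return {
    props: { posts },
    revalidate: 60, // regenerate at most once every 60 seconds, on request
  }
}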
If you notice that the external data you are relying on changes too frequently and you have to regenerate your pages all the time, SSG/ISG might not be the right option. Next.js also supports SSR for use cases like this: https://nextjs.org/docs/basic-features/data-fetching#getserversideprops-server-side-rendering
Sadly, there is currently no option to tell Next.js to regenerate the page after a content change in your external data source with a build hook or something similar. There is a Github Discussion page which might be interesting to follow if you want to stay up-to-date on this topic: https://github.com/vercel/next.js/discussions/11552
If you want to see a real life example, my personal website uses 2 external data sources (blogs from dev.to & data from Strava): https://thomasledoux.be. If you want to see the source code: https://github.com/thomasledoux1/website-thomas
|
OPCFW_CODE
|
Currently doing some RnD for work regarding flutter web, and it definitely has its use cases.
- Trivial to set up
- Shared code base for iOS, Android and Web
- Relatively Heavy
- No native html
@garritfra accessibility is non-existent too. I need to check if the same is true in Flutter Mobile, because there's a push to use it at work for a VoIP app and there's not a chance in hell I'm making a phone app that sight-impaired users can't operate.
@oppen It looks like they do have some accessibility features, including support for screen readers. I can't speak for their quality though.
@garritfra don't tell me that, I'm going to use this as reason enough to veto the idea so we can just do it natively
@tobtobxx I was referring to this: https://hugotunius.se/2020/10/31/flutter-web-a-fractal-of-bad-design.html - discussing the web only, but it was helpful for my anti-flutter bias for doing app work too.
@garritfra this was doing the rounds a while back: https://hugotunius.se/2020/10/31/flutter-web-a-fractal-of-bad-design.html Given how the Android screen reader operates I can't imagine Flutter will be able to operate at all, it uses the view hierarchy and Flutter is just one big window into their own world.
@oppen @garritfra I am using inverted colors in the browser myself, and Vimium too. I do have a broken experience on flutter web apps. But even after several years of flutter development (started with the late alpha) I can say that it is mindblowing how fast and with how few resources you can build stuff for all major platforms. We are talking maybe 60-80% less code/resources/time/money for the same result. So would I be bitching about beta support for another platform? No...
@oppen @garritfra Apps are built for the masses. If they can be built with less effort, the rest can be used for bug fixing, security, UX. If there is a use case where flutter web is a no-go? Then I will use flutter only for mobile. But 99.99% of users give a shit about neither Vimium nor dark themes. I am also not happy that existing geek tools won't work. We (geeks) have never been the target audience for any mainstream app/sw/ui kit. So we aren't now either. But hard numbers say Flutter is the Future.
|
OPCFW_CODE
|
Let me know if you’re still looking to hire someone to help with optimizing your site load speed. You can email me at email@example.com
We can help you www.wpboosters.com
The first step is to optimize your images and keep your pages small. Focus on that first. It could just be a slow host or too many accounts on one server.
http://www.webpagetest.org/result/130608_H9_1Q3/1/details/ shows both your first text/html + text/css resource are returned very slow.
Since both text/html + text/css are returning slowly, this will likely require ssh (command line) work for problem determination + resolution.
Once this common speed problem is fixed, likely many (maybe all) speed problems will resolve.
PM me if you require assistance.
I would suggest moving hosts if at all possible, as it appears your hosting may be a little underpowered.
WebsiteSpeedExperts is right on.
Because both your html + css are serving slowly, best guess is your Hosting company techies are clueless about how to setup high performance servers.
Your site looks to be running on a Soup Kitchen server too - many sites of unknown quality + traffic - so if your budget supports it, consider switching to a dedicated server.
You need to choose a fast webhost. I host at NameTyper.com
You need to optimize your site. Since you are running WordPress you should install a WordPress cache plugin, such as wp-cache.
In modern hosts' control panels there is a function for “compressing” css, html, js. You just need to “push that button” and that will help you a bit on the way too.
Most modern hosts have one-click install of CDN networks. I use CloudFlare since that is the one offered at my host (and most people seem to think Cloudflare is the best).
My site loads in less than one second. Have a try:
http://statsskuld.se/?optimize=1 (the ?optimize=1 at the end is to remove the Google banners and such, since those tend to slow down the site). I have chosen not to use a CDN at the moment, but I am sure the results will be below 1 second either way if you test anywhere in Europe. The site is Swedish, therefore I don't need a CDN.
Also, see if there are any really slow plugins installed in your WordPress, because your time to first byte seems very slow.
If you only have one site, use a virtual host.
If you have many sites, acquire a dedicated machine.
If you only have one site, which is generating a truckload of income, acquire a dedicated machine.
With virtual hosting, you’re gambling the hosting company knows what they’re doing.
With a dedicated machine, you'll have to do your own machine tuning + admin or hire someone to perform these tasks on an ongoing basis.
|
OPCFW_CODE
|
Master degree and PhD in biomathematics
The following question was posted on Math SE, but seems to be more related to Academia SE:
Next year I will start studying maths at university. I'm highly interested in biomathematics, but in my country there aren't specific courses for students. At least, there are very few Ph.D. programs. So I'm thinking of taking a 3-year degree course here and then a Master's degree and Ph.D. in another country. Are there such courses in the UK or US? If so, what are the entry requirements (in terms of, say, English language certificates, courses taken, etc.)? How can I prepare for such courses? Are there any suggested readings?
I know that pretty much every accredited and decently reputable university has some sort of bioinformatics or biomathematics program. Even the school I currently attend, Indiana State University, which by the way is an absolutely shit school for ANYTHING except business administration, aviation and education, has a master's-level bioinformatics degree, though I'm unsure about any PhD program. However, I know that Purdue has a program for its computer science BA students to have what is called a "focus" (very common in majors like CS because of the breadth of where you can take it; criminology also commonly has 5 or 6 focuses) in bioinformatic data systems, and you can then pursue a masters or PhD with said focus.
Basically, even though none of that really answered your question, pretty much every university you go to will have some sort of bioinformatics / biomathematics program (which, by the way, I don't know if they're the same thing, because I've always heard it called bioinformatics, which is the math behind biology... so I'm assuming they're the same thing), and if they don't, they will DEFINITELY have CS / Math programs that are completely relevant to the study and make it very easy to get into grad school, on the basis that the only thing you didn't learn about bioinformatics was the application of the principles from your math classes to said field.
Also, most universities here in the US don't have specific requirements for transfer students beyond language certification. I.e., if you can speak decent English, which you can, then you'll be fine. Most of the Arabs in my Econ & CS classes right now can't speak a word of English at all and they're here, so you'll be fine.
@usεr11852 you're right. Bioinformatics, from what I understand now, is the study of processing biological data.
However, I still contend that the fields are interchangeable. Many computer scientists could very easily change their major to some sort of mathematics (though not all) in the final 2 semesters and still graduate on time due to the interconnectedness of the two. All computer scientists are part mathematicians, and at the PhD level nowadays, all mathematicians are part computer scientists. I'd wager the two fields are roughly the same.
I believe Coursera.org has a few MOOCs in biomathematics (or at least courses very closely related to it).
Hi, Jochen. This really doesn't answer the question—the OP is referring to "courses" in the sense of "major," rather than "lecture."
"How can I prepare to such courses?" - Guess my answer points to a way that one could prepare for this kind of courses...
Bioinformatics is a bit of a collective name for many different cross-disciplinary research fields. Essentially it's biology, mathematics, statistics and programming blended into a dough, and baked together. Consider bread: using the same ingredients you can make many different types of bread in the end.
It's more or less the same with bioinformatics/biomathematics. I am a last-year bioinformatics PhD with less than 6 months to dissertation. So far, the people I have met that do similar work to mine I could probably count on the fingers of one hand. :)
Instead of considering fields, and courses and programmes, consider which skills you want to acquire and what subjects you want to work on. Essentially, the question boils down to what do you think is cool? Are you interested in RNAseq, or GWAS? Are you interested in doing SAM, or pathway dynamics? Perhaps signal processing for MS-based proteomics?
There are literally thousands of interesting problems out there that require serious bioinformatics efforts. Which program you studied is a bit irrelevant as long as you have the right toolset of skills.
In Sweden there are many universities strong in bioinformatics. The requirements are not very high (they depend on the program), and everything is in English and free for EU citizens.
|
STACK_EXCHANGE
|
In this issue, Michael Smith, in the cunning disguise of Judith Dinowitz, jumped on Michael Dinowitz and wrestled from him an interview the like of which has never been seen.
Michael Smith: I am here talking with Michael Dinowitz of House of Fusion fame about the presentation he will be giving at the upcoming CFUN on "Working with remote data". What kind of data are we talking about here, Michael?
Michael Dinowitz: The short answer is: Text data.
MS: You mean that's it?
MD: Well, honestly, when dealing with the Internet, the only data we really care about is text data. Email is text data. Web pages are HTML-formatted text data. RSS feeds are XML packets, which are just text data. Everything is text, and that is the data we're dealing with. So I'm waiting for you to ask the real question.
MS: So what is the real question?
MD: The real question is: Where will we be retrieving the data from, and how will we be making use of it? Let's take my latest "fun" site, JewsLikeNews.com. This site gets updated every half hour with news, if the news exists. The question is: Where does this data come from? The answer is quite simple: Google emails it to me. This means that some location (Google) is sending me text in an email of a specific format that I can programmatically retrieve and enter into a database.
MS: So the format really makes a difference ...
MD: Exactly. Format is everything when dealing with remote data. Format allows you to separate data from markup from garbage. Take the Google example that I just gave. Google sends different types of emails with different formats -- some with more useful data than others. My job is to decide what data is important so that I can write up a parsing program to always retrieve that data. I should never have to touch the JewsLikeNews site. It should all run automatically. The only time I'll ever have to rewrite my code is if Google changes their format.
MS: What if you don't know the format? And does the format change often?
MD: The ability to know the format is rather simple: You look at it and try to decide where the real data is. Is it the first line? Is it the first line after a blank line? Is it right after the letter P? All you have to do is get enough examples of the data (five emails from Google is enough for me) to see what the format is and where it changes. Once you identify what goes where, you can then write a program to deal with it. It's just pattern analysis. All you have to do is intuit what their pattern is.
As for how often the data changes, that all depends on who you're dealing with. I have a cute little agent that actually scans Macromedia's forums and emails me all the new posts. It would be simple to do if Fusetalk had an RSS feed or something else. But as things stand, I have to parse through multiple pages to get the data that I need. The data being outputted is in HTML here, with CSS there, and they've changed the format at least twice in the time that I've been running the agent. (And no, I won't actually be showing off this agent at the conference. What I will be showing off, on the other hand, is an Ebay agent that I've been using for a while that will retrieve all the items under a specific keyword. Now that's fun, because Ebay changes their format at least once a month, and the format is different if you go in with or without cookies.)
MS: So agents are one way of dealing with remote data. What other types of ways are there?
MD: When you say agents are a way of dealing with remote data, it's really important to define what an agent is, and what "dealing" means. An agent is a program whose job is to go out, get data from somewhere, parse through it for something that's important, and then do something with the parsed data. An agent can be as simple as emailing any time a news article is found with the keyword "Macromedia" to an entire content management system which will retrieve data, parse through it, check if the data already exists in a database, store it if it doesn't, and do some other operation if it does. An agent is a general term.
What you're really thinking of is how we deal with the data, as in, how do we parse it.
MS: So how do we parse the data?
MD: In reality, there's only one "true" way to parse data where you are not sure of the exact format that the data will be in, and that is Regular Expressions.
Regular Expressions are basically a sublanguage for defining text patterns. Using them, we can easily say what the patterns of data are, what comes before the important data, what comes after it, and what differentiates the important data from the formatting. That might sound a little confusing, so let's give a perfect example. Let's say we want some text that is within an anchor tag. We don't know what the attributes of that anchor are. We don't know that it has a target. We don't know the HREF. All we know is that we want the text that is going to be displayed as the link. Using Regular Expressions, we can say, "Look for the beginning of an anchor. Include anything that is within the tag itself. Look for the ending tag that is right after the anchor. And then, using these two defined boundaries, get all the text that is being used as the link in that anchor."
MS: Sounds complicated.
MD: Actually, it's not. It's very simple once you know the basics of Regular Expressions. The problem is that Regular Expressions look like someone threw up across your screen. (Yes, Perl uses Regular Expressions all the time.)
I've given classes on Regular Expressions before, and people always come out of them nodding their head with a look of understanding in their eye. Anyone can learn them. It just takes a few simple rules, and then the ability to string those rules together. Take the anchor example from above. All you have to do is define the beginning of an anchor tag, define the end of the tag (which is a closed bracket) and grab everything between the beginning and the end of the anchor. You don't care what's there -- You're just defining the pattern that says an anchor tag starts with a <a and ends with an >. Following that will be the link text, which will go on until you get a closing anchor, which will always be </a>. It's all about patterns and definitions.
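For the curious, here is the anchor pattern Michael describes, sketched in JavaScript (his conference code is ColdFusion, and the sample HTML is made up):

const html = '<a href="https://example.com" target="_blank">House of Fusion</a>';

// "<a", then any run of characters that are not ">", then ">",
// then capture everything up to the closing "</a>".
const anchorText = /<a[^>]*>(.*?)<\/a>/i;

const match = html.match(anchorText);
console.log(match && match[1]); // "House of Fusion"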
MS: Now you've made a complex subject sound very interesting.
MD: It is. It's actually more interesting than you think, because by using some of the agents which I'll be talking about, you can save time that would otherwise be spent searching through Ebay or searching through news ... You can actually write something that will do the work for you and deliver the exact content that you want. The time taken to write an agent vs. the time saved when searching for news that you find interesting makes this presentation definitely worth your while.
MS: Well, I'm someone that never has time to do anything, so I'll definitely try to find some time to stop in on some presentation. Will you be giving out code to save us time when building our own agents?
MD: Yep. All the code from my presentation is on the conference CD. Just put it in a computer, load it up and read through it. It's all commented. And I can tell you a story about comments ... but that's for another time.
MS: Well, thank you very much, Michael. I look forward to seeing you at CFUN.
MD: Thank you for your time.
|
OPCFW_CODE
|
holmes at catseye.idbsu.edu
Tue Jun 1 13:57:21 EDT 1999
Quoting (with approval!) Friedman thus:
It is not easy to talk about formalization in a clear way. Unless one is
almost unprecedently careful, one is surely going to be misunderstood. I
will attempt to be that careful.
I suggest that Simpson should carefully consider the possibility that
he is misunderstanding Conway's intentions in his "foundational"
remarks in _On Numbers and Games_. I see no "anti-foundational views"
in this passage; neither, I think, do many other respondents in the
discussion thus far.
What Conway is opposing (if he can be said to be opposing anything) is
"formalization in some specific system", with the accent on commitment
to some specific system, rather than on formalization. I think that
Conway can be understood as objecting to views which regarded some
specific system (say ZFC) as the indispensable core of foundations of
mathematics: of course, Simpson either holds or appears to hold views
of exactly this kind, which Conway might be expected to find
objectionable on the basis of the remarks in ONAG.
But it is also clear from Conway's remarks that he regards the
possibility of formalizing mathematical results in specific formal
systems (he mentions ZFC) as an important standard of reliability; he
hopes for results which will allow us (under suitable conditions) to
certify reliability under this standard without actually having to
carry out the formalization, which would not be at all a bad thing, if
it were possible.
There is nothing in this which suggests that Conway is opposed to
formalization per se or does not recognize that it is important;
rather the reverse.
So much for what I think Conway is saying.
As for what I think (Simpson asked for _my_ views...)
Bodies of mathematical knowledge are prior to formalizations, which are
a retrospective activity. This is less obvious in the case of set theory
(and very much less obvious in the case of logic) where formalization and
mathematical investigation go hand in hand, but I think that there is truth
in this even there.
Using an analogy which Conway makes in his discussion in ONAG, formalizations
can be regarded as analogous to coordinate systems. A coordinate system
is a valuable tool for talking about a space; one should note, though, that
the same space admits different coordinate systems, and that one can also
learn things about the space by investigating invariants which are independent
of the coordinate system used.
In an area that I know about, much the same piece of the mathematical
universe is formalized by the following theories:
1. Russell's theory of types with infinity and choice (as simplified
2. Mac Lane set theory (Zermelo set theory with comprehension restricted
to delta-zero formulas)
3. NFU + Infinity + Choice
4. topos theory ("category theoretic foundations") with a natural numbers object
These systems have precisely the same consistency strength (I might
have to specify 4 more precisely to make this true). Classical
mathematics (outside set theory) can be formalized in any of these
systems (with the same caveat about 4; I'm not sure that just having a
natural numbers object is enough to give the needed consistency
strength; of course one has to do some kind of double negation
interpretation to interpret classical results in the intuitionistic
logic of 4 -- perhaps Colin McLarty could help here?).
I would find any of the systems 1-3 about equally easy to work in (though
1 is notationally awkward). I would find 4 _extremely_ hard to work in.
As regards mathematical intuition, I find 1 and 2 equally reliable
(bedrock!); I know that 3 is reliable (this relies on mathematical
results (Jensen's consistency proof for NFU) which I would think of
initially as carried out in 1 or 2); I believe that 4 is reliable due
to mathematical work (known to me mostly by reputation) which I would
regard as having been carried out in system 2 or extensions (much
harder than the work needed to certify 3 as reliable!)
The reliability of 1 or 2 for me rests on direct intuition of what the
theories are about (my preformal understanding of set theory, typed or
untyped), plus the fact that extensive work in these systems has
revealed no inconsistency (if a contradiction were found I would
conclude that my preformal intuitions had been incoherent). I am
willing to regard 3 as similarly resting on a direct intuition, but I
don't regard this intuition as obviously correct (the proof of the
consistency of NFU is needed to firm it up for me, but I do understand
this proof...); however, I can regard 3 as an autonomous foundation,
because 3 is an extension of the system 1 (in a direction unexpected
to a ZF-iste, and only if 1 starts without strong extensionality), and
the consistency proof needed to justify 3 can be carried out in 1. I
have no intuition at all (or very little) for 4, and find it hard to
understand how 4 can be regarded as an autonomous foundational
proposal (though I can imagine that the intuitive understanding of 4
needed for this could be developed).
I would be willing to use any of the systems 1, 2, or 3 (or extensions
-- note that ZFC is an extension of 2, and my reasons for restricting
to 2 have to do only with the fact that this makes it easier to get
systems with exactly the same strength) as my official (formalized)
foundation for mathematics. Results in classical mathematics obtained
from any of these formalizations are demonstrably equally reliable. I
do not think that anything is gained by a fanatical devotion to one of
these systems (even 3 :-) ) as opposed to any of the others. I would
find it extremely difficult to use 4 as an official foundation, and I
am curious about the mindset that makes this appear possible...as I
suppose some are curious about the mindset that makes it possible to
consider 3...but this does _not_ mean that I regard it as impossible to
sincerely adopt 4 as a foundation.
I have used Conway's analogy in my own thinking: 1,2,3 and perhaps 4
are "coordinate systems" covering essentially the same mathematical
terrain. Results in any of these systems can be "translated" with
more or less difficulty to any other of these systems (1,2,3 being
quite close to each other and 4 rather distant from the others in
terms of ease of communication). What I am actually studying is not
any of these formal systems, but the "terrain" itself. If a program
like what Conway proposes in ONAG were possible, I might have tools
which would allow me to evaluate another proposed formal system 5 and
recognize (under suitable conditions) that 5 was another map of the
same terrain. No commitment to the existence of "mathematical
terrain" (platonism) is required for such a program to make sense; it
could equally well be expressed in terms of features of formal
theories. Note that such a program could only be carried out with the
full (and very clever) use of the same kinds of logical tools that
are needed in the formalization of mathematics in a fixed system.
It isn't clear to me that an adequate description of "invariants" of
formal theories that would certify them as being as reliable as (say)
ZFC would not be effectively a formalization of mathematics in its own
right, usable as an official foundation of mathematics. (I.e., I think
that a program like Conway's might be formally interesting, but I think
that the result might simply be another formulation of ZFC foundations).
I return to Friedman's quote and reiterate his warning that these things
are extremely hard to talk about!
And God posted an angel with a flaming sword at | Sincerely, M. Randall Holmes
the gates of Cantor's paradise, that the | Boise State U. (disavows all)
slow-witted and the deliberately obtuse might | holmes at math.idbsu.edu
not glimpse the wonders therein. | http://math.idbsu.edu/~holmes
|
OPCFW_CODE
|
Search for a matching string within a group based on a keyword
Scenario: Consider a restaurant in which whenever new items have to be added to the menu, the items have to be first approved or denied by the restaurant manager. The items that are denied or have a blank response are stored in a table called denials which is shown below.
Upon further examination, it is observed that some entries have a blank response because, for some items, the manager leaves the response as blank and instead adds another row manually and then marks that row as denied.
For example, the manager left blank responses for strawberry cheesecake and raspberry cheesecake and added a new row with the item berry flavored cheesecake and marked it as denied. This manual entry accounts for denials of both strawberry and raspberry cheesecake. We see similar examples in other groups, for instance, double carrot cake and tomato cake are collectively marked as denied by creating a new manual entry row for veggie flavored cake.
The requirement is that for each row with a blank response, we need to find the equivalent item which has a denied response. In order to tackle this problem, we have already created a keyword lookup table called lookup that contains the keyword which needs to be searched for in the item column in order to find the matching item with a denied response.
The final desired output is shown below:
Note: The resulting matching item should be within the same group. For example, in group D, cranberry cheesecake has a blank response. The lookup keyword for cranberry cheesecake is berry. So if we ignore the group column, then the matching item would be berry flavored cheesecake. However, the scope for searching has to be within the same group i.e. group D. Since group D does not have any item that matches the lookup keyword berry, the expected result is no response.
I am not able to figure out how to get the desired output. I need help with that.
Below is the SQL script to create the schema and data for the denials and lookup tables:
CREATE TABLE [denials](
[group] [nvarchar](50) NOT NULL,
[item] [nvarchar](255) NOT NULL,
[manager_response] [nvarchar](255) NULL
)
CREATE TABLE [lookup](
[item] [nvarchar](255) NOT NULL,
[lookup_keyword] [nvarchar](255) NULL
)
INSERT [denials] ([group], [item], [manager_response]) VALUES ('A', 'lemon cheesecake', 'denied')
INSERT [denials] ([group], [item], [manager_response]) VALUES ('A', 'strawberry cheesecake', NULL)
INSERT [denials] ([group], [item], [manager_response]) VALUES ('A', 'raspberry cheesecake', NULL)
INSERT [denials] ([group], [item], [manager_response]) VALUES ('A', 'berry flavored cheesecake', 'denied')
INSERT [denials] ([group], [item], [manager_response]) VALUES ('B', 'apple cheesecake', 'denied')
INSERT [denials] ([group], [item], [manager_response]) VALUES ('B', 'blueberry cheesecake', 'denied')
INSERT [denials] ([group], [item], [manager_response]) VALUES ('B', 'orange cheesecake', 'denied')
INSERT [denials] ([group], [item], [manager_response]) VALUES ('B', 'double carrot cake', NULL)
INSERT [denials] ([group], [item], [manager_response]) VALUES ('B', 'tomato cake', NULL)
INSERT [denials] ([group], [item], [manager_response]) VALUES ('B', 'veggie flavored cake', 'denied')
INSERT [denials] ([group], [item], [manager_response]) VALUES ('C', 'red grapes cheesecake', NULL)
INSERT [denials] ([group], [item], [manager_response]) VALUES ('C', 'green grapes cheesecake', NULL)
INSERT [denials] ([group], [item], [manager_response]) VALUES ('C', 'grape flavored cheesecake', 'denied')
INSERT [denials] ([group], [item], [manager_response]) VALUES ('D', 'cranberry cheesecake', NULL)
INSERT [denials] ([group], [item], [manager_response]) VALUES ('D', 'cinnamon cheesecake', 'denied')
INSERT [lookup] ([item], [lookup_keyword]) VALUES ('strawberry cheesecake', 'berry')
INSERT [lookup] ([item], [lookup_keyword]) VALUES ('raspberry cheesecake ', 'berry')
INSERT [lookup] ([item], [lookup_keyword]) VALUES ('double carrot cake', 'veggie')
INSERT [lookup] ([item], [lookup_keyword]) VALUES ('tomato cake', 'veggie')
INSERT [lookup] ([item], [lookup_keyword]) VALUES ('red grapes cheesecake', 'grape')
INSERT [lookup] ([item], [lookup_keyword]) VALUES ('green grapes cheesecake', 'grape')
INSERT [lookup] ([item], [lookup_keyword]) VALUES ('cranberry cheesecake', 'berry')
Why not store the full matching name in the lookup table and simplify the whole problem?
Try this:
SELECT A.[group]
,A.[Item]
,A.[manager_response]
,C.[Item] -- the matching denied item within the group; NULL if none
FROM [denials] A
LEFT JOIN [lookup] B -- pick up the lookup keyword for this item, if any
ON A.[item] = B.[item]
LEFT JOIN [denials] C -- search for a match within the same group only
ON A.[group] = C.[group]
AND C.[item] LIKE B.[lookup_keyword] + '%' -- candidate item starts with the keyword
AND B.[item] <> C.[item]; -- exclude the item being looked up itself
|
STACK_EXCHANGE
|
UPDATE #2 (2010-10-11): The CSAW CTF Team has published the official solutions of all of the challenges. Take a look here.
First, thanks to the CSAW team for creating this CTF, it was amazing :). Also, thanks to the people in my team; next time we'll do better ;p.
I'm usually scared of crypto challenges, but at CSAW I approached them in a creative/lazy way. Forget your crypto, the latest padding attacks and whatever. We are hackers, we usually find a way to solve things, but this time it won't be in the elegant way ;p.
Do you want to solve 3 challenges (1200 points) in 30 min?
After entering your username/team name, the server issues a new cookie (SID) that contains an encrypted value encoded with Base64.
Looking at the page, we obtain a role=5, but we need role 0 to obtain the key. So let's play a bit with the cookie, changing its values to see what happens...
You're role is level 5, but you need a role level of 0 to continue (normal message)
('need more than 1 value to unpack',)
Reason: Sorry, an error had occurred.
Reason: Sorry, an error has occurred. (strange, "has" instead of "had")
File "csaw.py", line 372, in challenge1
padding_length = struct.unpack("B", ptext[-1])
IndexError: string index out of range
File "csaw.py", line 367, in challenge1
ptext = aes_decrypt(sid.value, CSAW_CRYPTO_1_KEY, codec='base64')
File "/home/csaw/csaw/utils.py", line 122, in aes_decrypt
IllegalBlockSizeError: Input length must be multiple of 16 when decrypting with padded cipher
OK, so we have an AES crypto scheme with padding, and a lot of error messages. But wait... since the application decrypts the cookie, can we craft a cookie which, once decrypted, fools the application into giving us a role=0?
Let's launch Burp (an amazing tool, btw) and use the Intruder feature, selecting the "bit flipper" payload. Burp will flip one bit of every char of the original cookie.
GET /challenge1 HTTP/1.1
We wait for Burp to finish the attack, and after 688 requests, we take a look at the results. Luckily, I found 7 different cookie values that give us the flag...
Congratuations CHA (of team LENGE)! You have successfully completed
CSAW 2010 Crypto Challenge #1.
Here's your flag: 43fb994b59e8bb99d99ef969d773ea98
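If you'd rather script the attack than click through Burp, here is a rough Node sketch of the same bit-flipping loop (run as an ES module on Node 18+; the target URL and the success marker are assumptions):

const original = 'BASE64_COOKIE_VALUE_HERE'; // the issued SID cookie

for (let i = 0; i < original.length; i++) {
  for (let bit = 0; bit < 8; bit++) {
    // Flip exactly one bit of one character, like Burp's bit flipper.
    const chars = original.split('');
    chars[i] = String.fromCharCode(chars[i].charCodeAt(0) ^ (1 << bit));
    const res = await fetch('http://target/challenge1', {
      headers: { Cookie: `SID=${chars.join('')}` },
    });
    const body = await res.text();
    if (body.includes('flag')) {
      console.log(`hit at char ${i}, bit ${bit}: ${chars.join('')}`);
    }
  }
}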
A similar challenge to the one before; however, the previous trick is not working. Still, in some errors the following message can be spotted:
You're role is currently level L, however this area requires a role level of 0.
So, we managed to alter the level with a modified cookie. I analysed the cookie, looking for which modifications to the original cookie allow changing the user level.
In my case, it seems that 3 chars could potentially affect the role level...
So, again, launch Burp and create an "Intruder attack", but this time we'll brute force these 3 characters, using the charset [a-zA-Z0-9]. Yep, around 200k requests, but we'll finish before that.
Launch the attack, and after 1800 requests we take a look at the results...
Congratuations guest (of team guest)! You have successfully completed
CSAW 2010 Crypto Challenge #2.
Here's your flag: 8ee38021f40ef94e6725e9be07b49951
We solved it! Around 20 requests (of 1800) gave the correct answer...
I lost the logs for this challenge, but it seems that it contained a critical bug. As can be observed in the statistics, 33 teams solved CRYPTO 3, too high compared to the teams which solved CRYPTO 1 or CRYPTO 2 (16-18), so something was wrong with this challenge ;p.
If we read the tip from the previous challenge:
For the next challenge, you need to specify to impersonate the Administrator user
So, if we issue the following GET request, we will have solved this challenge (observe that it's a silly modification of the username, from Guest to Administrator):
As promised, I solved the challenges very quickly (in around 30 min):
It is necessary to remember that sometimes it doesn't matter how you solve the challenges; you just need to solve them quickly, especially in a CTF :).
|
OPCFW_CODE
|
Could this long sentence be reworded into a shorter and easier to understand sentence?
I came across the following sentence in a podcast by DW:
Es ist immer einfach, das zu tun, was die Mehrheit tut.
I wondered why it is weirdly worded. As a learner, I'd reword it to this:
Es ist immer einfach was die Mehrheit tut zu tun.
Is it correct? And am I correct in saying that the wording of the first sentence is quite weird?
I'm sorry to say I find the wording in the first sentence OK and your re-arrangement weird.
The sentence is quite short in my opinion. Its complexity originates in the generality the sentence keeps (likely to be exemplified in the text to follow). So the only way I see to simplification is: get specific what the process is, in which one wants to mirror the majority. Then split: Die Mehrheit tut (placeholder). Es ist einfach, auch (placeholder) zu tun.
@tofro: Perhaps the structure, divided into three parts by "das" and "was", is what misleads me. I am not sure why, but the original sentence still sounds weird to my ears.
You could actually drop the das and say: Es ist immer einfach, zu tun, was die Mehrheit tut. In that case was relates back to zu tun.
@RalphM.Rickenbach Could the first comma also be removed in that case?
Yes, @Gigili, it is optional.
Analysing the original sentence, we have a main clause which contains an extended infinitive construction, which in turn has a depending relative clause.
Es ist immer einfach, {das zu tun, {was die Mehrheit tut.}{rel. clause}}{inf. const.}
The wording of this is in no way perceived as weird, as a number of German speakers have already answered here. Why is that? Well, clearly the ‘was die Mehrheit tut’-relative clause depends only on the ‘das zu tun’ infinitive, so it makes sense having it after that. (German relative clauses typically follow what they are describing.) And of course, German has a tendency of putting infinitive constructions at the very end of a sentence. So this order makes sense, if you consider the relative clause as being part of the infinitive.
What about your suggested reordering?
Es ist immer einfach, {{was die Mehrheit tut,}{rel. clause} zu tun.}{inf. const.}
The main problem here is that relative clauses don't often come first. They do sometimes, especially if they refer to something rather general:
Was die Mehrheit von mir denkt, interessiert mich doch nicht.
It really just seems so much more natural to have the relative clause follow the infinitive in this case, though. I wouldn’t want to call it ‘wrong’, but if I were marking, I would underline it with a squiggly line to mean ‘not a good way to express it’.
Also note that the relative clause must be flanked by a pair of commas as I added here.
Maybe in this case it is helpful for you to understand the sentence structure if we made a translation:
Es ist immer einfach, das zu tun, was die Mehrheit tut.
It is always easy(ier) to do what the majority does.
Maybe this is a special case, as a German sentence structure wouldn't always fit the English structure. But as said, in this case one can clearly see that your wording sounds weird:
Es ist immer einfach was die Mehrheit tut zu tun.
It is always easy(ier) what the majority does to do.
I hope somebody can help you with a grammatical explanation!
A simpler way to express the same may be:
Es ist immer einfach(er) der Mehrheit zu folgen.
"Zu folgen" means to follow, to do likewise.
Another option could be:
Es ist immer einfach, zu tun, was die Mehrheit tut.
But I assume it will still sound weird to you, as the difference to the starting sentence is not that big and so is the difference in the meaning!
Es ist immer einfacher, was die Mehrheit tut, zu tun.
I would say theoretically this is not wrong (but you need the commas)!
But even if this is not wrong, it sounds extremely weird and isn't really nice German.
Everybody will understand you, but if you want to learn nice German, you should use it as it was used in your Podcast.
Another nice possibility would be to change some words of the sentence, like Medi1Saif said before:
Es ist immer einfach(er) der Mehrheit zu folgen.
I don't think that you can insert that clause there. There are a few variations that are grammatical but which would make the sentence really clumsy, like starting the sentence with "Das, was die Mehrheit tut" or even with "Was die Mehrheit tut, das". But splitting the sentence above in that awkward way, feels totally off to me.
It also sounds totally off to me, but I think it isn't really wrong.
This is wrong - it should be "Es ist immer einfach(er), der Mehrheit zu folgen.".
@MichelleH Are you referring to the comma? It may well be optional according to the latest rules …
|
STACK_EXCHANGE
|
Protection against CSRF added
Details sent to <EMAIL_ADDRESS> in messages
Wed, 30 Nov 2022 12:47:33 +0100.
Fri, 2 Dec 2022 19:59:02 +0100
Author-Change-Id: IB#1129006
I actually have a feeling that we should not be allowing the ReverseProxy Auth on the API - it's the only consistent way of making the API work
I don't fully understand why you have added another dependency library for the cors instead of making our current one work or replacing our current one.
it's the only consistent way of making the API work
What inconsistency do you see in using one auth method for all API calls?
I don't fully understand why you have added another dependency library for the cors instead of making our current one work or replacing our current one.
https://github.com/go-chi/cors does not provide an OriginAllowed method, and it looks like even the maintainer is not sure why this fork is still required. Existing CSRF code was not analyzed and left untouched; if this PR is accepted, the existing token-based CSRF stuff should be checked and (if it is still required as a separate line of defense) may be switched to github.com/rs/cors, or removed (if not required once gitea can assume browsers compatible with fetch metadata headers).
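For readers unfamiliar with fetch metadata: modern browsers attach a Sec-Fetch-Site header describing where a request originated, so a server can reject cross-site writes. A sketch of the concept in TypeScript/Express terms (illustration only; Gitea's actual implementation is Go):

import type { Request, Response, NextFunction } from 'express';

export function rejectCrossSiteWrites(req: Request, res: Response, next: NextFunction) {
  const site = req.get('Sec-Fetch-Site'); // set by modern browsers only
  const isWrite = !['GET', 'HEAD', 'OPTIONS'].includes(req.method);
  // 'same-origin' and 'none' (direct navigation) are safe; a missing header
  // means an older browser or a non-browser client, which this sketch allows.
  if (isWrite && site && site !== 'same-origin' && site !== 'none') {
    res.status(403).send('cross-site request rejected');
    return;
  }
  next();
}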
For every other authentication mechanism we now require that API calls are separately and explicitly authenticated - this makes ReverseProxy the odd one out.
If account A can make change X using the "web API", why shouldn't it be allowed to make change X using the "API"? Why "mess" with separate API areas and separate sets of auths for both? Any real risks?
If separation is required - why not introduce user permissions to access a given API area (i.e. "[x] access to web", "[x] access to API") as an alternative to application tokens (which should be optional)?
This PR means we now have TWO libraries adding CORS headers.
Code in this PR does not handle CORS requests; it only uses CORS machinery to decide if some requests are valid. If this PR is accepted, the existing CORS code may be switched to github.com/rs/cors to avoid using two CORS libs (should not hurt as a temporary solution).
If account A can make change X using the "web API", why shouldn't it be allowed to make the same change X using the "API"? Why "mess" with separate API areas and separate sets of auths for both? Any real risks?
I didn't completely agree with the decision to do this - but it was done. ReverseProxy currently stands as the odd one out.
The idea is that you have to authenticate explicitly for the API separately from the UI.
If separation is required - why not introduce user permissions to access a given API area (i.e. "[x] access to web", "[x] access to API") as an alternative to application tokens (which should be optional)?
Or use tokens that we already have.
This PR means we now have TWO libraries adding CORS headers.
Code in this PR does not handle CORS requests/responses; it only uses CORS machinery to decide if requests are valid during CSRF validation. If this PR is accepted, the existing CORS code may be switched to github.com/rs/cors to avoid using two CORS libs (should not hurt as a temporary solution).
That's not the way to do things; it won't get done and we'll end up with 2 slightly different CORS libraries. The go-chi/cors library should be replaced in this PR. I won't approve this PR without that, although I'm not going to block it.
|
GITHUB_ARCHIVE
|
Writing good documentation is a mission-critical aspect of building software these days. Whether you are writing something for yourself, for the Open Source community, or your company, it's essential to keep things documented for good reasons:
- It makes developers more productive since they can rely on the docs to understand how to use your software.
- It makes developing the software itself easier by promoting knowledge sharing amongst developers.
The process of writing documentation is often tiresome. Imagine if you also needed to worry about building the whole foundation to make your documentation available to the public. Amongst the many options that provide such a foundation, I enjoy Docusaurus the most because:
- It's opinionated and provides you with all the structure you need to ship content fast. Docusaurus will decide how to structure your content, manage plugins, etc.
- It allows further customization with React. You can apply your knowledge of React to extend your documentation with other UI components built in React, as you see fit.
- It's Markdown all the way. Just like Gatsby or Hugo, the seamless interpolation with Markdown makes the experience of writing docs a breeze.
- It's GitHub pages friendly. GitHub pages are one of the sweethearts of free website hosting, especially when hosting documentation for open source projects. It's easy to set up and super convenient. Docusaurus docs have a guide just for this purpose.
- It's documentation-first; this is why I would pick Docusaurus to write docs over and over again. The sharp focus on documentation makes the whole solution even more attractive, since you'll find the default way pages are structured very suitable for writing documentation. Of course, the Docusaurus ecosystem is vast; simply look at their showcase page to understand that its use goes way beyond simple static documentation websites.
Here are a few tips & tricks that will help you build Docusaurus sites, get past some technical challenges, and better know what Docusaurus has to offer.
MDX is fantastic. It allows you to interpolate Markdown with JSX (React components). It's incredible to be able to simply plug in a React component. Here's a snippet from this blog post's markdown file, where we include a React component to collect feedback at the bottom of the article.
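The snippet looked roughly like this (the Feedback component and its import path are illustrative, not the blog's actual code):

import Feedback from '@site/src/components/Feedback';

Normal **Markdown** text, followed by a React component:

<Feedback articleId="docusaurus-tips" />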
You're reading a blog post built on Docusaurus. We use the Blog features to create content. Most things will be there waiting for you to start; add a markdown file under the blog/ folder and start typing!
Probably one of the handiest features is the ability to access the content of your docusaurus.config.js through a context React hook. Here's a small component that builds a social media sharable link using information from the config file (this same link produces the sharable paragraph at the bottom of our articles).
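A small sketch of the idea (the component name and tweet text are illustrative; the useDocusaurusContext hook is the part that matters):

import React from 'react';
import useDocusaurusContext from '@docusaurus/useDocusaurusContext';

// Builds a "share on Twitter" link from values in docusaurus.config.js.
export default function ShareLink({title}) {
  const {siteConfig} = useDocusaurusContext();
  const text = encodeURIComponent(`${title} ${siteConfig.url}`);
  return (
    <a href={`https://twitter.com/intent/tweet?text=${text}`}>
      Share this article
    </a>
  );
}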
Unlike a plain React app, there is no index.html where you can add globals to your application. An alternative is to ship your global code, i.e., code that you want to run on every page (e.g., a cookies banner component), in your theme/Footer/index.js. Because Docusaurus includes the footer in all pages, you can write logic that will execute on every page, for example by mounting a React component.
I became aware of the existence of clsx while working with Docusaurus. Look at it whenever you need to apply some CSS class conditionally. Here's a small example.
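(A sketch; footer here is assumed to be an object read from your theme config.)

import clsx from 'clsx';

// 'footer' is always applied; 'footer--dark' only when the condition holds.
<footer className={clsx('footer', {'footer--dark': footer.style === 'dark'})}>
  ...
</footer>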
The footer--dark class would be applied if the condition footer.style === 'dark' is met.
People delete sites and move sites all the time. It's an awful experience to read a page, click on a reference, and end up empty-handed, facing a blank page. Docusaurus offers an out-of-the-box solution to scan for broken links. I highly recommend using the onBrokenLinks option to warn you of broken links in your content; this will make it very hard or even impossible for you to ship broken links! Here's how to configure it in your docusaurus.config.js:
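A minimal sketch:

// docusaurus.config.js
module.exports = {
  // ...
  onBrokenLinks: 'throw', // fail the build instead of shipping broken links
  onBrokenMarkdownLinks: 'warn',
};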
If you are migrating from another stack and plan on introducing new routes, this small redirect trick will be super useful. You can redirect old routes to the new ones by creating a simple markdown file with the name of the old route, and redirect users to your shiny new page.
It's super easy to set up analytics with the plugin-google-analytics. Here are the steps:
- Install the plugin
- Add the plugin's config object to your docusaurus.config.js (a sketch follows below)
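A sketch of that config object (the tracking ID is a placeholder, and your preset may already wire this up):

// docusaurus.config.js
module.exports = {
  // ...
  plugins: [
    [
      '@docusaurus/plugin-google-analytics',
      {trackingID: 'UA-XXXXXXXXX-X', anonymizeIP: true},
    ],
  ],
};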
And that's all. I hope the above tips unblock your journey of content creation with Docusaurus.
If you liked this article, consider sharing (tweeting) it to your followers.
|
OPCFW_CODE
|
namespace ReportGeneratorUtils
{
using System;
using System.IO;
using System.Reflection;
using System.Threading;
using System.Xml;
using System.Xml.Xsl;
public static class XmlToHtmlTransformer
{
private const string DEFAULT_XSLT = "Templates\\HTMLReport.xslt";
/// <summary>
/// Gets the default XSLT path.
/// </summary>
/// <returns>Default XSLT template file path</returns>
private static string GetDefaultXsltPath()
{
return Path.Combine(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location), DEFAULT_XSLT);
}
/// <summary>
/// Transforms to HTML without cancellation support.
/// </summary>
/// <param name="xmlReportData">The XML report data.</param>
/// <param name="xsltFilePath">The XSLT file path. If not passed the default XSLT template will be used.</param>
/// <returns>String that results after the XSLT is applied to the XML string</returns>
public static string TransformToHTML(string xmlReportData, string xsltFilePath = null)
{
return TransformToHTML(xmlReportData, CancellationToken.None, xsltFilePath);
}
/// <summary>
/// Transforms to HTML.
/// </summary>
/// <param name="xmlReportData">The XML report data.</param>
/// <param name="cancellationToken">Token used to abandon the transformation early; an empty string is returned if cancellation is requested.</param>
/// <param name="xsltFilePath">The XSLT file path. If not passed the default XSLT template will be used.</param>
/// <returns>String that results after the XSLT is applied to the XML string</returns>
public static string TransformToHTML(string xmlReportData, CancellationToken cancellationToken, string xsltFilePath = null)
{
if (cancellationToken.IsCancellationRequested)
return string.Empty;
if (string.IsNullOrWhiteSpace(xsltFilePath))
{
xsltFilePath = GetDefaultXsltPath();
}
string finalReportString = String.Empty;
using (StringReader xsltStringFromFileReader = new StringReader(File.ReadAllText(xsltFilePath)))
{
if (cancellationToken.IsCancellationRequested)
return string.Empty;
// xslInput is a string that contains xsl
using (StringReader xmlReportStringReader = new StringReader(xmlReportData)) // xmlInput is a string that contains xml
{
if (cancellationToken.IsCancellationRequested)
return string.Empty;
using (XmlReader xsltFileXmlReader = XmlReader.Create(xsltStringFromFileReader))
{
if (cancellationToken.IsCancellationRequested)
return string.Empty;
XslCompiledTransform xsltCompiledTransformation = new XslCompiledTransform();
xsltCompiledTransformation.Load(xsltFileXmlReader);
using (XmlReader xmlReportXmlReader = XmlReader.Create(xmlReportStringReader))
{
if (cancellationToken.IsCancellationRequested)
return string.Empty;
using (StringWriter outputStringWriter = new StringWriter())
{
if (cancellationToken.IsCancellationRequested)
return string.Empty;
using (XmlWriter xsltXmlWriter = XmlWriter.Create(outputStringWriter, xsltCompiledTransformation.OutputSettings)) // use OutputSettings of xsl, so it can be output as HTML
{
if (cancellationToken.IsCancellationRequested)
return string.Empty;
xsltCompiledTransformation.Transform(xmlReportXmlReader, xsltXmlWriter);
finalReportString = outputStringWriter.ToString();
}
}
}
}
}
}
return finalReportString;
}
}
}
|
STACK_EDU
|
package interactions
import (
"context"
"crypto/ed25519"
"encoding/hex"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"oscen/tracer"
"go.opentelemetry.io/otel/attribute"
"github.com/Postcord/rest"
"github.com/Postcord/objects"
"go.uber.org/zap"
)
type handler = func(
ctx context.Context,
interaction *objects.Interaction,
interactionData *objects.ApplicationCommandInteractionData,
) (*objects.InteractionResponse, error)
type Interaction struct {
*objects.ApplicationCommand
handler handler
}
type router struct {
routes map[string]handler
interactions []*Interaction
rest *rest.Client
log *zap.Logger
publicKey ed25519.PublicKey
}
func NewRouter(log *zap.Logger, publicKey ed25519.PublicKey, rest *rest.Client) *router {
return &router{
rest: rest,
routes: map[string]handler{},
interactions: []*Interaction{},
log: log,
publicKey: publicKey,
}
}
func (r *router) Register(interactions ...*Interaction) error {
for _, i := range interactions {
r.log.Info("registering command with router", zap.String("name", i.Name))
r.routes[i.Name] = i.handler
r.interactions = append(r.interactions, i)
}
return nil
}
func (r *router) SyncInteractions(guildId *objects.Snowflake) error {
usr, err := r.rest.GetCurrentUser()
if err != nil {
return err
}
for _, i := range r.interactions {
if guildId != nil {
_, err = r.rest.AddGuildCommand(usr.ID, *guildId, i.ApplicationCommand)
if err != nil {
return err
}
} else {
_, err = r.rest.AddCommand(usr.ID, i.ApplicationCommand)
if err != nil {
return err
}
}
}
return nil
}
// verifySignature validates Discord's ed25519 interaction signature:
// the X-Signature-ED25519 header must be a valid signature over the
// X-Signature-Timestamp header concatenated with the raw request body.
func (r *router) verifySignature(req *http.Request, body []byte) error {
signatureHeader := req.Header.Get("X-Signature-ED25519")
timestamp := []byte(req.Header.Get("X-Signature-Timestamp"))
signature, err := hex.DecodeString(signatureHeader)
if err != nil {
return fmt.Errorf("could not decode signature: %w", err)
}
if !ed25519.Verify(r.publicKey, append(timestamp, body...), signature) {
return fmt.Errorf("invalid signature")
}
return nil
}
func (r *router) handleCommand(ctx context.Context, interaction *objects.Interaction) (*objects.InteractionResponse, error) {
r.log.Debug("interaction.handle_command", zap.Any("data", interaction))
ctx, childSpan := tracer.Start(ctx, "interactions.handle_command")
defer childSpan.End()
commandData := &objects.ApplicationCommandInteractionData{}
err := json.Unmarshal(interaction.Data, commandData)
if err != nil {
return nil, err
}
childSpan.SetAttributes(
attribute.String("io.oscen.command_name", commandData.Name),
attribute.String("io.oscen.discord_user", fmt.Sprintf("%d", interaction.Member.User.ID)),
)
handler, ok := r.routes[commandData.Name]
if !ok {
return nil, fmt.Errorf(
"cannot find handler for interaction: %s", commandData.Name,
)
}
return handler(ctx, interaction, commandData)
}
type httpStatusErr struct {
Code int `json:"code"`
Cause error `json:"error"`
}
func wrapErrorForHTTP(code int, cause error) *httpStatusErr {
return &httpStatusErr{
Code: code,
Cause: cause,
}
}
// handleRequest verifies and decodes an incoming interaction, answers
// Discord's ping, and dispatches application commands to their handlers.
func (r *router) handleRequest(req *http.Request) (interface{}, *httpStatusErr) {
body, err := ioutil.ReadAll(req.Body)
if err != nil {
return nil, wrapErrorForHTTP(401, err)
}
err = r.verifySignature(req, body)
if err != nil {
return nil, wrapErrorForHTTP(401, err)
}
interaction := &objects.Interaction{}
err = json.Unmarshal(body, &interaction)
if err != nil {
return nil, wrapErrorForHTTP(401, err)
}
switch interaction.Type {
case objects.InteractionRequestPing:
return objects.InteractionResponse{Type: objects.ResponsePong}, nil
case objects.InteractionApplicationCommand:
response, err := r.handleCommand(req.Context(), interaction)
if err != nil {
return nil, wrapErrorForHTTP(500, err)
}
return response, nil
}
return nil, wrapErrorForHTTP(404, fmt.Errorf("could not handle request"))
}
func (r *router) ServeHTTP(w http.ResponseWriter, req *http.Request) {
data, err := r.handleRequest(req)
if err != nil {
r.log.Error("failed to handle request", zap.Error(err.Cause))
body, marshalErr := json.Marshal(err)
if marshalErr != nil {
w.WriteHeader(500)
_, _ = w.Write([]byte("fatal error serialising error"))
return
}
w.WriteHeader(err.Code)
_, _ = w.Write(body)
return
}
if data != nil {
body, marshalErr := json.Marshal(data)
if marshalErr != nil {
w.WriteHeader(500)
_, _ = w.Write([]byte("fatal error serialising response"))
return
}
r.log.Debug("writing response", zap.Any("data", data))
headers := w.Header()
headers.Set("Content-Type", "application/json")
w.WriteHeader(200)
_, _ = w.Write(body)
return
}
w.WriteHeader(204)
}
|
STACK_EDU
|
Since 2014, I have published six books in print editions as well as ebook. My latest novel, She Who Returns, will be the seventh. Unless I decide it’s not worth the effort.
All right, I’m dramatizing. But really, you’d think that by now I would be familiar with the steps and the process would be routine.
I’ll bet you’re expecting a rant about formatting the Word document. Well, no. Or at least not yet. This is about getting through Amazon’s quality checks. After my experiences with correcting errors in a previously published book, I didn’t expect it to be easy.
In fact, even before I started, I was a nervous wreck, anticipating hurdles and hoops and cryptic warnings that would drive me to appeal to the Help people, like a bewildered newbie instead of a seasoned self-publisher.
I was right.
Take the ISBN, for example. When setting up my previous six books (on CreateSpace and its successor Amazon KDP Print), I entered the 13-digit ISBN without the hyphens inserted by the issuing agency (Library and Archives Canada, in my case). This time, I was admonished via a popup that I had failed to enter an ISBN, even though all 13 of its digits were right there in the appropriate slot. With no other explanation, I appealed to the Help folks by email. Within 24 hours, as promised, I received a reply suggesting I should enter the ISBN as issued by the official body, including the hyphens. Great, except it would have saved everyone time and aggravation if that requirement had been right there on the book setup page, instead of useless accusations of failing to enter the information. And another thing–you are now encouraged to supply the imprint associated with your ISBN. As a self-publisher, the imprint is your name, unless you have a “publisher” name (“Desperado Press,” for example) registered with your ISBN source (such as Bowker, LAC, the National Library of New Zealand, etc.).
The next big challenges were the interior (text) file and the cover. I uploaded the PDF of the text file successfully, it seemed, but I was unable to invoke the Print Previewer, which would notify me of errors, such as incursions into the gutter no-go zone, or… who knew what else? But I couldn’t open the Print Previewer until I had uploaded the cover image. That’s another annoyance–it should be possible to use the Previewer as soon as the text file is uploaded. If there’s a margin problem, fixing it could result in a larger page count, which could affect the spine width. If an author has hired a cover designer, it would be awkward to have to ask for changes (and possibly pay extra for them).
At least my cover image (designed and created by me on Canva) uploaded successfully. I invoked the Print Previewer and was notified that fonts were not properly embedded in my Word document (never mind that I had precisely followed Amazon’s instructions on how to do that). Amazon had apparently embedded them for me, but warned that some features of my book might not look right when printed. Twenty-one instances were flagged with an “i” in a circle. Supposedly the “i” means “information,” but all I saw when I clicked on it was a tiny black square.
The Help person who answered my question about that simply trotted out the party line about embedding fonts as per instructions, which I had already done. Yes, I would have to fix the problems with the fonts in my document. If following the Amazon instructions didn’t do the trick, there was a hint that I should consult Microsoft about how to work with Word.
In a pig’s eye, as some would say.
Instead, I sat down and did some thinking. If unembedded fonts were causing the problem, surely every page would be flagged? Why only those 20 pages? They were actually all the right-hand (odd numbered) pages in the first three sections of numbered pages. And as always, the problem was in the header of those three sections. (Word’s headers and footers are the very devil!)
To shorten a long, tedious tale, it turned out that even though the book's title in the header was in Copperplate Gothic Light font, as I intended, Word's default Arial font was also living in the headers of those pages, even though there was no text in Arial. Repeated attempts to change it led nowhere, except to the brink of insanity. I finally found the solution by moving the cursor along the header space while watching the font dropdown (in the Home tab). At a certain point, the font in the dropdown changed from Arial to Copperplate. So I highlighted the empty space where Arial was manifesting and changed that to Copperplate. The change finally stuck. I rejoiced.
When I uploaded the PDF I created after these changes, the Print Previewer still grumbled about fonts not properly embedded, but there were no more problem spots flagged.
I have approved the book’s content file and ordered a proof copy. If that looks okay, this saga will end happily.
In the meantime, here are my tips for other self-publishers who want to produce a print edition:
- Ask yourself if you really, really want to hold that wad of paper and ink in your hands. Because it may well cost you time, money, or both, to achieve it. You may experience strong emotions and swear a lot.
- Keep your font choices simple. Don’t use free fonts downloaded from the internet; I understand they can be impossible to embed. I stuck to fonts already in Word (Copperplate Gothic Light and Palatino Linotype), but even they were problematic. To be honest, I don’t know which fonts would work without problems. Arial and Times New Roman, maybe? Judging by what I found by googling, font problems are common in Amazon’s POD publishing.
- Adobe Reader can supposedly tell you if your fonts are embedded. Click on File in the top left corner and select Properties in the resulting window. Then click on the Fonts tab. This is what alerted me to the presence of Arial in my document. I knew I hadn’t used that font anywhere. (But note: even though Adobe had “Embedded subset” next to all my font types, Amazon’s Previewer still said the fonts weren’t embedded properly. So who knows…)
- Seek out and read Amazon’s instructions for publishing paperbacks. There are a lot of them, and some are even helpful. But they don’t cover all eventualities, from what I’ve seen.
- If you need to appeal to Amazon KDP’s Help, I think email is a better way to contact them than by phone. For one thing, you can attach files of your documents. But the individuals who respond may not know that much more than you. Be prepared to figure things out.
- If or when you get desperate enough to look for help on the internet, think about how you word your searches and be prepared to change them if the results you're getting aren't relevant. You will find evidence that others are having problems at least as bad as yours. On the other hand, every situation is different, and there's a lot of useless advice out there.
- You can upload a succession of revised PDFs as you make changes, as many as you have to, and see what the Print Previewer tells you after each one. I think it took me five or six tries before the problem flags disappeared.
- I worked with a single Word document (which I named She_Who_Returns_print), from which I produced my succession of PDFs. As each PDF turned out to have problems, I renamed it, adding _bad1, _bad2, etc. to the end of the filename. That way, I knew which ones I could safely delete at the end. (And it might be a good idea to Save As a copy of the almost-but-not-quite-good-enough Word doc as a backup, in case your efforts to fix problems end in disaster and you have to start from scratch.)
- Don’t add to the stress by creating a hard deadline for publishing your print edition. If you must have copies by a certain date, for an event such as a launch or book-signing session, build in a lot of time to get the job done. Start sooner rather than later.
- If all this makes your head spin, consider hiring someone to do your formatting. I’ve never done that, so have no advice for finding a competent individual, or any idea how much it might cost. I have heard that using Amazon’s print book templates is easier than formatting from scratch. I’ve never used them, but maybe I should next time. If there is a next time.
- Cultivate patience. Don’t take publishing rage out on innocent persons, pets, or computers. (Rest assured–I haven’t.)
Remember, She Who Returns is on pre-order until May 1st, attractively priced, along with She Who Comes Forth, the first book in the set.
|
OPCFW_CODE
|
Enable Auto Merging PRs when status checks are required.
This is a follow up to #865 and #975
If a branch is protected and has the flag Require status checks to pass before merging enabled, then I don't see how I could use the existing feature.
Because until those checks are successful, I just see a grayed out button and can't press anything:
It would be awesome, if there would be another button (created by this extension) next to it, to merge as soon as all checks have passed.
If the form is just hidden but in the dom, then it’s feasible. It can be un-hidden and the checkbox would have to be readonly
I just came across this exact problem too :(
I took a look at the DOM and the ID merge_message_field doesn't exist when Require status checks to pass before merging is checked. This is the ID of the text area with the squash message.
So it looks like it would be slightly harder than just unhiding it :(
This feature would be ✨
I'm adding some gasoline by noting that Azure DevOps currently provides this functionality
To add this feature, we'd have to recreate the whole form, including the logic between merging types. Additionally, we can't know which merging types are available (e.g. a repo only allows Squashing). Sounds impossible 🙁
I think you should ask GitHub to implement our wait-for-build feature (regardless of this requirement)
Adds the option to wait for checks when merging a PR
fwiw this works for me as a browser snippet
function autoMerge() {
  // Find the merge button by its label (DevTools' $$ returns an Array, so .filter works).
  const btn = window.$$('button').filter(el => el.innerText === 'Merge pull request')[0]
  if (btn) {
    btn.click()
    // The confirmation button only appears after the first click.
    const btn2 = window.$$('button').filter(el => el.innerText === 'Confirm merge')[0];
    if (btn2) {
      btn2.click()
      return // merged; stop polling
    }
  }
  // Button not clickable yet (checks still running): poll again in a second.
  setTimeout(autoMerge, 1000)
}
// Persist the DevTools-only $$ helper so it's still reachable inside setTimeout callbacks.
window.$$ = $$;
autoMerge();
That only works if you're fine with the default commit title and message. It's a good workaround but probably most people would expect to be able to preview/edit them.
@fregante can you add a little message next to the merge button that explains why the feature isn't working? I thought there was an issue with GHR originally
this is now a built-in feature:
https://github.blog/2020-12-08-new-from-universe-2020-dark-mode-github-sponsors-for-companies-and-more/
Auto-merge pull requests (https://github.com/github/roadmap/issues/107): when using protected branches. Rolling out over the next couple of weeks, enabled in your repo settings.
Thanks for the update!
That’s completely different from “Auto-merging” as intended by this extension (i.e. the user still has to click Merge whereas GitHub’s new feature will be completely automatic) but it’s probably what most people are after 🎉
That’s completely different from “Auto-merging” as intended by this extension (i.e. the user still has to click Merge whereas GitHub’s new feature will be completely automatic)
this isn't true. You still have to tick the "merge when pipeline passes" checkbox
Indeed, I think that changed since it came out of beta.
Too bad GitHub decided to make this a per-repo option rather than just allowing it on every repo like Refined GitHub does.
@fregante perhaps Refined GitHub should automatically enable this feature on all repositories the user has access to on first install?
That's not something the extension should do. A cli tool would be better suited to do batch changes across multiple repos.
|
GITHUB_ARCHIVE
|
Facts About programming homework help Revealed
The line between a language and its core library differs from language to language. In some cases, the language designers may treat the library as a separate entity from the language. However, a language's core library is often treated as part of the language by its users, and some language specifications even require that this library be made available in all implementations. In fact, some languages are designed so that the meanings of certain syntactic constructs cannot even be described without referring to the core library.
A description of the behavior of a translator for the language (e.g., the C++ and Fortran specs). The syntax and semantics of the language have to be inferred from this description, which may be written in natural or formal language.
Managing clients just got easier. A well-designed system based on Java will enable you to manage your clients with great ease and grace.
Large enterprises and small-scale startups dealing in properties will be able to maintain a database that has all information relevant to each and every property available for sale or rent. This is one of the best simple project ideas.
Coding can also be used to figure out the most suitable solution. Coding also helps to communicate thoughts about programming problems. A programmer dealing with a complex programming problem, or finding it hard to explain the solution to fellow programmers, could code it in a simplified manner and use the code to demonstrate what he or she means.
It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language. In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way. Programming languages differ from natural languages in that natural languages are only used for communication between people, while programming languages also allow humans to communicate instructions to machines.
Planning, managing and designing are called out explicitly to counter claims that XP does not support those activities.
It is difficult to determine which programming languages are most widely used, and what usage means varies by context. One language may occupy the greater number of programmer hours, a different one have more lines of code, and a third may consume the most CPU time. Some languages are very popular for particular kinds of applications.
ALGOL refined both structured procedural programming and the discipline of language specification; the "Revised Report on the Algorithmic Language ALGOL 60" became a model for how later language specifications were written.
Mr. Sarfaraj Alam aka Sam is amazing with any kind of programming assignment. You name any language: C, C++, Java, Matlab, C#, web application, database, data structure, game, animation, etc. As mentioned, I did all my assignments throughout my semester and I got over 98 or more, which is an A, in every single assignment I gave to Mr. Sam. He helped me in all the assignments. I used lots of online services for my assignments before, but they were rude and offered no clarity on how the work would be done, no real customer service and no real communication until I found out about Sam. I called him the very first time and asked how he works on finishing an assignment, and I was never as pleased as I am today. I am still using his services for my projects, assignments, etc. I felt I was talking to a friend, and we bonded into a really good friendship.
One of the best systems for managing crowds at a gym. Management can handle people well with a system that maintains the records of everyone enjoying access to the facilities.
Also, there is the danger of micro-management by a non-technical representative trying to dictate the use of technical software features and architecture.
Pair programming lets team members share problems and solutions quickly, making them less likely to have hidden agendas toward each other.
I would give my assignments a day before and he would do them anyway without any hesitation, and I would still get a full score on my projects and assignments. I am actually a very busy person; working and going to school is really stressful, but when Sam is there you can sleep very peacefully, without any tension. He is very friendly and understands your needs, urgency and the quality of the work as per your requirements. I read the reviews and people were complaining about the prices he charges; I would say, if you want your work done in just one day, who else would do it? No one but Sam, and the quality is 100%. In my opinion I would really recommend his services; just ask him and he will get through your assignments with full attention, error free. I was a troubled student having a hard time in my program, but using his services I am close to getting my degree. Thank you so much Sam, I really appreciate your services to me.
|
OPCFW_CODE
|
test: add typed arrays to known globals list
small NPN doc fix
platform: fix GetFreeMemory() on 64 bits freebsd
v_free_count is defined as u_int v_free_count (struct vmmeter, sys/vmmeter.h:87),
but the variable info is defined as unsigned long; this causes errors on 64-bit systems
because the higher 32 bits remain uninitialized
build: add src/v8_typed_array.cc to gyp sources list
typed arrays: fix signed/unsigned compiler warnings
typed arrays: preliminary benchmarks
typed arrays: add Float64Array
typed arrays: alias method subarray() to slice()
typed arrays: integrate plask's typed array implementation
crypto: PBKDF2 function from OpenSSL
uv: upgrade to 7f82995
Incorporate endianness into buffer.read* function names instead of passing in a boolean flag
test: enable simple/test-http-dns-error for `make test-uv`
test: add test for #1202, uncatchable exception on bad host name
net: defer DNS lookup error events to next tick
net.createConnection() creates a net.Socket object
and immediately calls net.Socket.connect() on it.
There are no event listeners registered yet so
defer the error event to the next tick.
Now working on v0.5.5
Bump version to v0.5.4
Upgrade libuv to 65f71a2
Upgrade V8 to v3.5.4
Upgrade libuv to d358738
Add some debug output to test-child-process-double-pipe
net_uv: resume on closed net.Socket shouldn't crash
build: .gitignore build/ directory
Fix #1497 querystring: Replace 'in' test with 'hasOwnProperty'
http: destroy socket on error
Needs further investigation, the test passed without `--use-uv`.
Fixes failing test:
net_uv: pipes don't have getsockname
net: properly export remoteAddress to user land
Fixes failing test:
test: fix logic error in test-net-remote-address-port.js
The test intended to register an 'at exit' listener
but called `process.exit()` instead.
Fix MSVS building.
Upgrade libuv to ca633920f564167158d0bb82989d842a47c27d56
node: propagate --use-uv to child processes
uv: upgrade to e8497ae
win: fix test-process-env
Remove support for setting process.env.TZ as it doesn't seem we can do it
x-platform without fixing V8.
uv: upgrade to b328e4c
uv: upgrade to b6b97f3
|
OPCFW_CODE
|
This reducer is designed to be mounted on top of a NEMA 17 standard stepper motor. It has a high reduction ratio of 96.6667:1 in a very small package (only 20mm tall, and 80mm in diameter). In the tests I conducted, the reducer didn't show even the slightest signs of backlash, but I can't guarantee it won't be an issue after prolonged use. This design also eliminates the wobble of the output which can be observed in 3D printed harmonic drives; removing that wobble is required if the device is to be used in an actual mechanism which needs to perform accurately/precisely. If you encounter any problems, please contact me so I can improve the design based on your input.
Basic technical data: Gear ratio: 1:96.6667
Sun gear teeth: 24
Planet gear teeth: 16
Fixed ring gear teeth: 56
Output ring gear teeth: 58
Ratio calculation: i = (1 + (fixed_T / sun_T)) * (output_T / 2)
i = (1 + (56 / 24)) * (58 / 2) = 96.6667
Functioning principle explanation: This system consists of 2 rings with inside gearing; one is fixed and the second is the output ring. Both planet gears are driven by the sun gear, and engage both rings at the same time. Each revolution of the planet carrier (around its own axis) will move the outer ring by 2 teeth (in the direction of the motor). This is achieved by adding 2 extra teeth to the output ring, and then re-adjusting the outer ring module to align its pitch diameter to be approximately the same as that of the fixed ring. This must be done to ensure proper teeth meshing.
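As a quick sanity check of the ratio formula, here is a minimal Python sketch using the tooth counts listed above:

# Compound planetary reduction ratio: i = (1 + fixed_T / sun_T) * (output_T / 2)
sun_T = 24      # sun gear teeth
fixed_T = 56    # fixed ring gear teeth
output_T = 58   # output ring gear teeth

ratio = (1 + fixed_T / sun_T) * (output_T / 2)
print(f"Reduction ratio: {ratio:.4f}:1")   # prints: Reduction ratio: 96.6667:1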
Versions: There is an educational version included, which will allow you to observe what's actually happening in the reducer
(print 'tr17_edu' instead of 'tr17')
There is also an alternative version of the output ring which allows for easier mounting of a M8 bolt as the output.
(print 'tr17_fullcover' instead of 'tr17', and print 'ad17')
Additional parts: You will need additional screws to assemble this reducer. Most of the holes will accept M3 screws. A few different lengths are needed. The most important screws you should install are those that go through the planet pin on the planet carrier; it relies on the screw to strengthen the pin. You can get away with excluding some of the screws, but use common sense.
NOTES: If you plan to use this as a part of a functional mechanism, use plenty of grease or other lubricant! Do NOT load the output ring directly! Use an external bearing to prevent wear of the reducer! You can however use this as a direct source of torque without using external bearings, just keep in mind to not exceed the ratio*motor holding torque, or else the drive could slip (it could slip under much lower torque, but it's hard for me to test it out because of the high ratio; it's more likely that the plastic will deform before the drive slips)
|
OPCFW_CODE
|
As an experienced systems integrator, EDI2XML has completed many successful integrations across diverse industries. Based on our extensive experience, in this article we want to discuss the various integration methods available to businesses to integrate with Oracle JD Edwards.
We’ll keep it simple and give you a basic understanding of each method. Our goal is to give you the knowledge you need to make integration easier for your business. In future articles, we’ll dive deeper into each method to help you understand them better.
What is Oracle JD Edwards?
Oracle JD Edwards, or JD Edwards EnterpriseOne, or just JDE is a comprehensive suite of enterprise resource planning (ERP) software that helps businesses manage their financial, operational, and human resources processes.
Oracle JD Edwards utilizes cutting-edge technologies like cloud computing, mobile platforms, the Internet of Things (IoT), artificial intelligence (AI), and machine learning (ML) to create smart solutions and make users’ experience better.
In the past, Oracle JD Edwards went by various names like OneWorld, B732, Xe, and 8.98, each representing different software versions. However, its current and widely recognized name is Oracle JD Edwards EnterpriseOne. People may also refer to it informally as JDE, JDEdwards, JDE E1, or OJDE, which are typically shortened versions or acronyms based on its official name.
USEFUL READING: Oracle JD Edwards Integration: The Key to Digital Transformation
While Oracle JD Edwards provides over 80 configurable modules to meet specific customer needs, it may not address all enterprise application requirements. Nevertheless, it seamlessly integrates with other business applications and systems, offering various interaction and integration patterns for optimal integration.
Understanding JDE Integration
Oracle JD Edwards (JDE) EnterpriseOne offers various integration methods that enable the system to interact with external applications, databases, and services.
Thus, integrating JDE with other systems involves linking its robust functionalities with diverse software applications like CRM, ERP, HRM, EDI, and more. This integration facilitates data exchange, automation, and real-time insights, enhancing overall business performance.
Integration Methods Comparison
Here’s a list outlining different methods of Oracle JD Edwards (JDE) integration, including JDE Orchestrator, Direct Database integration, Dynamic Java Connector, and Magic xpi.
JDE Orchestrator is a tool that enables you to create, test, and deploy REST services that interact with JD Edwards EnterpriseOne applications and data.
- Easy to use and configure
- Supports real-time integration
- Supports JSON and XML formats
- Supports IoT devices and cloud services
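To make the Orchestrator option concrete, here is a minimal Python sketch of invoking an orchestration over REST. The host name, orchestration name, credentials, and payload below are placeholder assumptions, and the exact endpoint path can vary by tools release, so treat this as a starting point rather than a definitive recipe:

# Sketch: invoke a JDE orchestration through the AIS REST interface.
# Host, orchestration name, and credentials are hypothetical.
import requests

AIS_HOST = "https://ais.example.com"     # assumed AIS/Orchestrator server
ORCH_NAME = "AddressBookLookup"          # hypothetical orchestration name

payload = {"AddressNumber": "4242"}      # inputs are defined by the orchestration

response = requests.post(
    f"{AIS_HOST}/jderest/v3/orchestrator/{ORCH_NAME}",
    json=payload,
    auth=("JDEUSER", "JDEPASSWORD"),     # basic auth; token auth is also possible
    timeout=30,
)
response.raise_for_status()
print(response.json())                   # orchestration output as JSON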
Direct Database Integration
Integrating external systems directly with the JD Edwards database, typically using SQL queries and database connections.
- Fast and simple
- Supports batch and real-time integration
- Supports any database format
- No additional software required
We have the expertise and the tools to help you integrate JDE with other systems in the most efficient and effective way. Book a free consultation now
Dynamic Java Connector
A tool that enables you to create Java classes that interact with JDE business functions using the Java Connector Architecture (JCA) specification.
- Supports real-time integration
- Supports complex business logic
- Supports any Java-based platform or application
- Supports transactions and security
Z-Table Processing
The “Z tables” are transition tables in SQL/Oracle whose sole objective is to load data into JDE. In general, this approach is used for the “bulk import” of data into the JDE database.
- Can be done using various methods, such as flat files (CSV, TXT, etc.) to import and export data between JDE and external systems.
- Z-table integration can be used for both inbound and outbound transactions
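As an illustration of staging one inbound record (with an invented table and column names, since real Z-table layouts vary by module), the load from Python might look like this:

# Sketch: insert one record into a hypothetical Z (staging) table; a JDE
# batch processor would then pick it up. Table and column names are made up.
import pyodbc

conn = pyodbc.connect("DSN=JDE_DB;UID=stageuser;PWD=secret")   # assumed DSN
cur = conn.cursor()
cur.execute(
    """
    INSERT INTO PRODDTA.F55Z1 (ZUSER, ZBATCH, ZTRANS, ZPROCESSED, ZVALUE)
    VALUES (?, ?, ?, ?, ?)
    """,
    ("INTUSER", "BATCH001", 1, "N", "4242"),   # 'N' = not yet processed
)
conn.commit()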
Business Services Server (BSSV)
BSSV is a component within the JDE system that facilitates communication and integration between JD Edwards applications and external systems.
- Supports SOAP-based and RESTful services for integration.
- BSSV efficiently manages high volumes of transactions, ideal for enterprise-level deployments
Application Interface Services (AIS)
AIS is a component of Oracle JDE that provides a lightweight RESTful interface that allows JDE applications to run on various devices such as smartphones and tablets through REST API calls.
- Acts as a middleware layer, allowing external systems, mobile applications, and other software solutions to interact with JDE.
EDI (Electronic Data Interchange)
EDI is one of the integration methods with Oracle JDE. JDE has a data interface system (code 47) that acts as a staging area for moving data in and out of the application systems using EDI standard formats.
- Supports various EDI standard formats, such as X12, EDIFACT, HL7, and others
- Enables JDE to communicate with external systems.
Magic xpi is an integration platform designed for connecting various enterprise applications, including JD Edwards, through a visual drag-and-drop interface. It has an Oracle-certified connector to JDE that can interact directly with JDE Business Services and its Dynamic Java Connector.
- Easy to use and maintain.
- Supports batch and real-time integration.
- Supports multiple protocols and formats.
- Supports cloud and on-premises deployment.
- Visual mapping, drag and drop from source to destination.
These methods can be used separately or in combination, depending on the complexity and requirements of the integration. It’s also important to note that integration strategies should be selected and implemented in a way that suits the organization’s IT policies, security standards, and performance requirements.
Conclusion: Oracle JDE Integration
Integrating JD Edwards (JDE) with other systems is crucial for streamlining business processes and maximizing operational efficiency. Each method offers unique advantages and capabilities, enabling seamless data exchange and collaboration across your organization and beyond.
If you’re ready to explore the possibilities of JDE integration or have questions about implementing these methods in your organization, our team is here to help.
Book your free consultation today to learn more and start optimizing your JD Edwards system for enhanced performance and productivity.
We’d love to hear from you! What integration methods have you found most effective in your JDE implementation? Do you have any questions or suggestions regarding JDE integration? Leave a comment below and let’s continue the conversation!
|
OPCFW_CODE
|
How to delete through the VIEW?
I use Microsoft SQL Server 2017 Management Studio.
I have these tables:
create table dob
(
PIB int primary key,
naziv nchar (10) not null,
broj_racuna int
)
create table ddob
(
PIB int primary key,
tel nchar (10) not null,
MB int,
adr nchar (10)
)
PIB is foreign key to table dob
I created a view dd_all:
SELECT D.PIB, D.naziv, D.broj_racuna, DD.tel, DD.MB, DD.adr
FROM dbo.dob AS D
INNER JOIN dbo.ddob AS DD ON D.PIB = DD.PIB
I need a trigger: when I delete something from the view, that trigger needs to delete it from dob and in ddob.
I tried with this:
CREATE TRIGGER trigg_1
INSTEAD OF DELETE
AS
BEGIN
DECLARE @pib_delete int
SELECT @pib_delete = PIB
FROM dob
WHERE dob.PIB = @pib_delete
DELETE FROM dobavljac_sve
WHERE dobavljac_sve.PIB = @pib_delete
DELETE FROM dobavljac
WHERE dobavljac.PIB = @pib_delete
END
Also:
declare @pib_delete int
select @pib_delete = PIB from dobavljac_sve where dobavljac_sve.PIB=@pib_delete
delete from dobavljac_sve where dobavljac_sve.PIB=@pib_delete
delete from dobavljac_detalji where dobavljac_detalji.PIB=@pib_delete
delete from dobavljac where dobavljac.PIB=@pib_delet
No one can really help when you post abbreviated DDL for 2 tables but a trigger that references 3 tables (2 of which are not included in your DDL). In addition, you suffer from a very common mistake. You assume one row is affected by a delete statement in your trigger - a false assumption. Go find discussions about triggers for beginners to understand that problem.
The trigger would need to look something like this:
CREATE TRIGGER trigg_1 ON dd_all
INSTEAD OF DELETE
AS
begin
delete d
from ddob d
where d.pib in (select dd.pib from deleted dd);
delete d
from dob d
where d.pib in (select dd.pib from deleted dd);
end;
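A delete issued against the view then fires the trigger; for example (with a hypothetical key value):

DELETE FROM dd_all WHERE PIB = 1;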
error message: view or function 'd' is not updatable because the modification affects multiple base tables
@BeginnerCoding . . . I fixed the table names.
again error: the row value(s) updated or deleted either do not make the row unique or they alter multiple rows(3 rows) :/
dob is the parent table, so you need to delete "bottom up" if the FK is not set to cascade.
tried to swap places but gives error the row value(s) updated or deleted either do not make the row unique or they alter multiple rows(3 rows)
ddob should be the parent table. Note OP statement PIB is foreign key to table dob. There is nothing wrong with the trigger posted by Gordon. The error must be due to something else. @BeginnerCoding please show us your delete statement (please update your question with that delete statement)
|
STACK_EXCHANGE
|
Windows 10 devices currently don't have WebView2 Runtime pre-installed. This means applications that want to leverage the new WebView2 controls, such as the upcoming new version of Take a Test, are unable to until WebView2 Runtime is installed. IT admins can manually download Runtime from WebView2 - Microsoft Edge Developer and install it as administrator on each device. For Intune admins looking for an automated way to deploy WebView2 Runtime, refer to the following guidelines, which leverage Win32 app management capabilities.
Note: For the Take a Test app, if the WebView2 Runtime is not installed on a device, the app will fall back to using legacy System XAML WebView controls.
- Download the Evergreen standalone installer from WebView2 - Microsoft Edge Developer. For this guide, we downloaded the x64 standalone installer.
- Create a separate folder and place the downloaded installer file in it (example folder name: Runtime download). Make sure there are no other files in this folder.
- Create a new folder for the .intunewin file you will create, which is needed to deploy Win32 apps (example folder name: Intunewin destination).
- Download the Microsoft Win32 Content Prep Tool from GitHub - Microsoft-Win32-Content-Prep-Tool. This tool is used to wrap the Win32 app so that it can then be uploaded to Intune. Unzip the file and then copy the file path to the folder containing IntuneWinAppUtil.exe.
- Open the Command Prompt and type cd followed by the path name to the folder containing IntuneWinAppUtil.exe. Then run the file.
- Type in the information for the source folder (Step 2), setup file, and output folder (Step 3).
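If you prefer to skip the interactive prompts, the tool accepts the same information as command-line arguments; a sketch using the example folder names from the steps above (your actual paths will differ):

IntuneWinAppUtil.exe -c "C:\Runtime download" -s "C:\Runtime download\MicrosoftEdgeWebView2RuntimeInstallerX64.exe" -o "C:\Intunewin destination"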
- Open Intune and navigate to Apps > Add and select Windows app (Win32) as the App type.
- Select MicrosoftEdgeWebView2RuntimeInstallerX64.intunewin from the output folder you created.
- For the App information, populate the fields. Most fields should be auto-populated. You may fill in Microsoft as the publisher.
- On the Program tab, provide the install and uninstall commands.
The install command is:
MicrosoftEdgeWebView2RuntimeInstallerX64.exe /silent /install
The uninstall command is:
%programfiles(x86)%\Microsoft\EdgeWebView\Application\107.0.1418.35\Installer\setup.exe --force-uninstall --uninstall --msedgewebview --system-level --verbose-logging
Note that the uninstall prompt is version specific, so make sure you check the version of your Runtime.
- On the Requirements tab, select 64-bit and a minimum operating system.
- On the Detection rules tab, manually configure detection rules to check if Runtime is already installed on the device by checking the registry. If Runtime is already present, then nothing will happen.
The Key path to check for is:
If you are installing Runtime for a 32-bit Windows, the Key Path is:
- In the assignments tab, select the devices that you want to target the WebView2 Runtime to be installed on. If Runtime is already installed on the targeted devices, no action will occur. Once the WebView2 Runtime installation is complete (check with Device install status), it will enable Take a Test to leverage WebView2.
- Prepare a Win32 app to be uploaded to Microsoft Intune
- Add and assign Win32 apps to Microsoft Intune
- Distribute your app and the WebView2 Runtime
If you have any questions, please leave a comment below or reach out to us on Twitter @IntuneSuppTeam.
|
OPCFW_CODE
|
If you are looking for a way to download SQL Server database version 655, you might be confused by the different versions and editions of SQL Server available. SQL Server database version 655 refers to the internal database version number of SQL Server 2008, which is different from SQL Server 2008 R2. SQL Server 2008 R2 has an internal database version number of 661, and it is not compatible with SQL Server 2008. You cannot attach or restore a SQL Server 2008 R2 database to a SQL Server 2008 instance.
So, how can you download SQL Server database version 655? There are two options: you can either download SQL Server 2008 Express, which is a free edition of SQL Server, or you can download SQL Server 2008 Developer, which is a full-featured edition licensed for development and testing purposes only. Both editions have the same internal database version number of 655.
To download SQL Server 2008 Express, you can visit the Microsoft Download Center and choose the appropriate language and platform for your system. You will need to have Windows Installer 4.5 and .NET Framework 3.5 SP1 installed on your computer before installing SQL Server 2008 Express. You can also download the SQL Server Management Studio Express, which is a graphical tool for managing your databases.
To download SQL Server 2008 Developer, you will need to have a Visual Studio subscription or an MSDN subscription. You can access the Visual Studio Downloads page and search for "SQL Server 2008". You will find the SQL Server 2008 Developer edition under the "SQL Server" category. You can also download the SQL Server Management Studio, which is a more advanced tool for managing your databases.
Once you have downloaded and installed the SQL Server database version 655 of your choice, you can create, attach, restore, and backup your databases using the tools provided. You can also connect to your databases using various drivers and connectors for different programming languages and platforms.
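If you need to confirm the internal version of an existing database, you can query it directly; here is a minimal sketch, assuming Python with pyodbc and a local trusted connection (adjust the connection string for your instance):

# Sketch: read a database's internal version number (655 = SQL Server 2008).
import pyodbc

conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes")
cur = conn.cursor()
cur.execute("SELECT DATABASEPROPERTYEX('master', 'Version')")
print(cur.fetchone()[0])   # e.g. 655 on a SQL Server 2008 instance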
Another feature of SQL Server 2008 that can help you manage your databases more efficiently is Policy-Based Management. This feature allows you to define and enforce rules or policies for your SQL Server instances, such as naming standards, security settings, backup schedules, and more. You can create policies using a graphical interface or a declarative language, and apply them to one or more SQL Server objects. You can also evaluate the compliance of your SQL Server instances with the policies, and generate reports or alerts for any violations.
If you are working with geospatial data, such as locations, shapes, routes, or boundaries, you will appreciate the new geospatial data types and functions introduced in SQL Server 2008. These data types are geometry and geography, and they allow you to store and manipulate spatial data in your databases. You can also use various methods and functions to perform operations on spatial data, such as calculating distances, intersections, areas, buffers, and more. SQL Server 2008 supports the Open Geospatial Consortium (OGC) standards for spatial data.
One of the challenges of storing large amounts of data is the disk space consumption and the performance impact. SQL Server 2008 offers a solution for this problem with Table Compression. This feature enables you to compress your tables and indexes to reduce their size and improve their performance. You can choose between row-level compression, which reduces the storage of fixed-length data types, or page-level compression, which also eliminates repeated values within a page. Table Compression can save you disk space, reduce I/O operations, and improve query performance.
|
OPCFW_CODE
|
Table of Contents
- 1 How do I fix request entity is too large?
- 2 How do I fix request entity is too large in Chrome?
- 3 What does it mean Request Entity Too Large?
- 4 What does HTTP Error 413 mean?
- 5 How do you stop Error 413?
- 6 What is a 412 error?
- 7 How do you fix a URL that is too long?
- 8 How do I fix HTTP Error 414 the request URL is too long?
- 9 What is a Web URI?
- 10 Are long URLs good?
- 11 Why is my URL so long?
- 12 Why do links have random numbers?
- 13 How do I change my funnel URL in Clickfunnels?
- 14 What is the question mark in a URL?
- 15 What characters are allowed in URL?
- 16 What does means in a URL?
- 17 How do I find the parameters in a URL?
- 18 What is build a URL parameter?
How do I fix request entity is too large?
Method to Fix the Entity Too Large Error
- Increasing the upload file size via the functions file in cPanel.
- By increasing the upload file size with the .htaccess file.
- By increasing the upload file size via the WordPress file.
- Edit the upload file size using the php.ini file.
- Manually upload the file via FTP.
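For the php.ini route, the relevant directives look like this (the 64M values are only examples; pick limits that fit your uploads):

upload_max_filesize = 64M
post_max_size = 64M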
How do I fix request entity is too large in Chrome?
- Click the three vertical dots next to your profile icon, then click Settings.
- Click Privacy and security.
- Click Cookies and other site data.
- Click See all cookies and site data.
- Search for “constantcontact.com.”
- Click Remove All Shown.
- Click Clear all.
What does it mean Request Entity Too Large?
A 413 Request Entity Too Large error occurs when a request made from a client is too large to be processed by the web server. An example request, that may cause this error would be if a client was trying to upload a large file to the server (e.g. a large media file). …
How do I fix request entity too large in PHP?
Modify functions.php
- In your cPanel menu, select File Manager under Files.
- Navigate to the folder of your current theme inside your root WordPress directory (public_html by default). Open this theme folder.
- Select functions.php and click the Edit icon.
- Copy the code below and paste it at the end of the file.
- Click Save.
How do I fix Nginx 413 Request Entity Too Large?
Error: 413 “Request Entity Too Large” in Nginx with “client_max_body_size” / Changes in Nginx config file.
- Step 1: Connect to your nginx server with your terminal.
- Step 2: Go to the config location and open it.
- Step 3: Search for this variable: client_max_body_size.
- Step 4: Increase its value (or add the directive if it is missing).
- Step 5: Restart nginx to apply the changes.
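In the config file, the directive looks like this (20M is only an example limit; size it to your largest expected request body):

http {
    client_max_body_size 20M;
}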
What does HTTP Error 413 mean?
Payload Too Large response status code
How do you stop Error 413?
Fixing 413 Request Entity Too Large Error in WordPress
- Method 1. Increase Upload File Size Limit via Functions File.
- Method 2. Increase Upload File Size Limit via .htacces File.
- Method 3. Manually Upload File via FTP.
What is a 412 error?
The HyperText Transfer Protocol (HTTP) 412 Precondition Failed client error response code indicates that access to the target resource has been denied. This typically happens when a precondition given in request headers such as If-Unmodified-Since or If-Match is not fulfilled.
What is a 414 error?
The HTTP 414 URI Too Long response status code indicates that the URI requested by the client is longer than the server is willing to interpret. This can happen, for example, when a client improperly converts POST data into a long GET query string, or when the server is under attack by a client attempting to exploit potential security holes.
How do you fix 414?
How To Fix 414 Request URI Too Large in Apache
- Open Apache Configuration File. Apache configuration file is located at one of the following locations, depending on your Linux distribution.
- Increase URI limit. To fix 414 Request URI too large error, you need to set LimitRequestLine directive.
- Restart Apache web server.
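The directive takes a byte limit; for example, to raise the 8190-byte default:

LimitRequestLine 16384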
How do you fix a URL that is too long?
How To Fix. There are several things that you can do to avoid URLs that are too long: If using dynamic URLs with URL parameters, use server-side URL rewrites to convert them into static, human-readable URLs. Try to minimize the number of parameters in the URL whenever possible.
How do I fix HTTP Error 414 the request URL is too long?
If one of your scripts runs a big operation on the server and returns a URL which is too long, you will get a 414 error meaning that the request’s URL was too long (“The requested URL’s length exceeds the capacity limit for this server”). Indeed, the Apache server’s 8190 bytes limit applies to these addresses.
What is a Web URI?
A Uniform Resource Identifier (URI) is a unique sequence of characters that identifies a logical or physical resource used by web technologies. Other URIs provide only a unique name, without a means of locating or retrieving the resource or information about it; these are Uniform Resource Names (URNs).
How do I fix 414 Request URI too large Nginx?
HTTP 414 Request-URI Too Large can be handled in a similar manner as the HTTP 413 error. To handle this, we have to modify the large_client_header_buffers parameter in the server configuration. As mentioned in the documentation, the default size of large_client_header_buffers is 8 KB.
How long is too long for a domain name?
15 to 17 characters
Are long URLs good?
But when you really break it all down, deciding on the length of a URL is quite simple. The shorter the better. According to Backlinko, “Shorter URLs tend to rank better than long URLs.” To prove this, they performed some extensive testing on one million Google search results.
Why is my URL so long?
Have you ever noticed how long these Web addresses are? Ever wonder why these URL are so long? The answer is simple: tracking codes. Tracking codes are strings of text added to the end of a URL that let you track the source of a click.
Why do links have random numbers?
These identifiers are long random strings of letters and numbers in order to make sure that there are no duplicates. For example, every video on YouTube has a random identifier to distinguish it from all the other videos, and if you copy a link to the video it will contain this identifier.
How are URLs generated?
Dynamic URLs or dynamic sites are generated at the moment a user submits a search query. Unlike static websites, they are not stored as a whole on the relevant server, but are generated with the stored data on the server and an application.
Why do links have random letters?
When you see strings of apparently random letters/numbers in a URL it’s often for security purposes, though not always. Basically when the page is created the site software creates a random string of characters and inserts it into the URL. This keeps people from guessing where the page is located on the server.
How do I change my funnel URL in Clickfunnels?
The Funnel Step URL is specific to the pages/steps in your funnel.
- From within your funnel, select the step in the funnel that you would like to update.
- Click on the Gear icon to the left of the funnel step URL.
- Update the Path.
- Click on Update Funnel Step.
What is the question mark in a URL?
The question mark (“?”, ASCII 3F hex) is used to delimit the boundary between the URI of a queryable object, and a set of words used to express a query on that object. When this form is used, the combined URI stands for the object which results from the query being applied to the original object.
What characters are allowed in URL?
A URL is composed from a limited set of characters belonging to the US-ASCII character set. These characters include digits (0-9), letters(A-Z, a-z), and a few special characters ( “-” , “.” , “_” , “~” ).
How do you get a question mark out of a URL?
The percent character (“%”) is used in URLs to encode/escape other characters, so it must itself be encoded as well. URL Encoding of Special Characters:
|Character|Code Points (Hexadecimal)|Code Points (Decimal)|
|Question mark (“?”)|3F|63|
|‘At’ symbol (“@”)|40|64|
What is URL query?
A query string is a part of a uniform resource locator (URL) that assigns values to specified parameters. A query string commonly includes fields added to a base URL by a Web browser or other client application, for example as part of an HTML form.
What does means in a URL?
Uniform Resource Locator
How do I find the parameters in a URL?
To identify a parameter in a URL, you need to look for a question mark and an equals symbol within the URL. In this case, the “?” denotes the start of the parameters. The term “productid” is in and of itself the parameter, and in this case is designated as a product ID number.
What is build a URL parameter?
Simply put, URL parameters consist of ‘tags’. You can append these ‘tags’ that contain campaign information to any URL. You can use URL Builders to generate these or create them manually.
What is a page parameter?
A page parameter is a named value that one page can pass on to another. A page parameter can be set to a component property, another parameter, or a constant value. Page parameters are typically appended to a URL as query strings or are submitted with a form.
|
OPCFW_CODE
|
Is this a virtual environment or physical? If it's virtual, create snapshots of both DCs; if physical, use your backup software and create a full backup in case of failure (I would recommend testing the restore process to ensure the recovery). After that you need to find out the domain functional level: is it running Windows Server 2000 mixed or Windows Server 2003 native, etc.? In order to migrate to Win2k8 R2 DCs you need to be running in at least Windows Server 2003 native mode. You can find this out by opening AD Domains and Trusts, right-clicking on your domain and selecting "Raise domain functional level". You can raise the level to Windows Server 2003 (but once you do this you can't go back).

After you raise the functional level, I would just use ntdsutil to find out the "FSMO roles" that each DC holds (you can google what I have quoted; you can also use MMC to find out all the information). Next you need to update the schema master on the 2003 DCs, which can be done by running adprep32.exe /forestprep from the support/adprep folder off the 2008 R2 CD. If everything successfully updates, run adprep32 /domainprep, adprep32 /domainprep /gpprep and, if you plan on having read-only DCs, adprep32 /rodcprep.

Again, if you have no bumps in the road and everything was successful, you can prep your 2008 R2 member server to be a domain controller. So on the 2008 R2 server run dcpromo and join it to the existing forest; make sure you make it a DNS and Global Catalog server. After your server(s) are now DCs you can safely start to transfer roles over to the new DCs. This is where you can go back to the FSMO roles and see which servers had which functions and transfer those over to the new servers. For instance, if your Schema and Domain Naming roles were owned by test-dc1, transfer those roles to test2k8R2-dc1. Likewise, if the RID, PDC, and Infrastructure roles were owned by test-dc2 you can transfer those roles…
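As a quick sanity check before and after the transfers, you can list the current role holders from an elevated prompt on any DC; for example:

netdom query fsmo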
IT 221 REPORT #1
HYPER V FOR WINDOWS SERVER 2008
The Hyper-V role enables you to create and manage a virtualized server computing environment by using a technology that is part of Windows Server 2008. The improvements to Hyper-V include new live migration functionality, support for dynamic virtual machine storage, and enhancements to processor and networking support.
Live migration functionality allows you to transparently move running virtual machines from one node of the failover cluster to…
Microsoft Server Product Portfolio
Customer Solution Case Study
Harrods Uses Intranet Technology to Improve Workforce Communications
“We decided to use the Microsoft SharePoint Server 2010 intranet architecture because it offered so many of the features we needed.”
Kieron Bissett, People Management Applications Manager, Harrods
Leading retailer Harrods needed a collaboration platform for its 8,000 staff in the United Kingdom (U.K.) to improve processes. It worked with Microsoft…
CompTIA Server+ (2009 Edition) Certification
The CompTIA Server+ (2009 Edition) certification is an international vendor-neutral credential. The Server+ exam is a validation of "foundation" level server skills and knowledge, and is used by organizations and IT professionals around the globe. The skills and knowledge measured by this examination are derived from an industry-wide Job Task Analysis (JTA) and were validated through a global survey in…
Where the cloud meets the ground
Oct 23rd 2008
From The Economist print edition
Data centres are quickly evolving into service factories
IT IS almost as easy as plugging in a laser printer. Up to 2,500 servers—in essence, souped-up personal computers—are crammed into a 40-foot (13-metre) shipping container. A truck places the container inside a bare steel-and-concrete building. Workers…
With a host of new features in Microsoft Windows Server 2008, I believe that this utility will help Wingtip Toys utilize its IT investment more efficiently. Combining this new OS with powerful computer hardware and services solutions can result in a tremendous productivity boost from:
• Enhanced virtualization features that help you increase system availability
• Streamlined management over your remote systems
• Improved security to help ensure the confidentiality, integrity and…
Today, I will install Windows Server 2008 in a virtual machine. I will also install Active Directory in the server. I will then create three user accounts with different levels of authority, create three groups and three computer accounts all within the Windows Server 2008.
My plan is to walk you through the process with descriptions and snapshots of what is being done or configured.
I will first start VMware Workstation to begin the server installation
You then select…
Microsoft SQL Server is an incredible relational database management system developed by Microsoft. It offers an excellent mix of performance, reliability, ease of administration, and new architectural options, yet enables the developer or DBA to control minute details when desired. SQL Server is a dream system for a database developer. There are at least a dozen different editions of Microsoft SQL Server aimed at different audiences and for different workloads (ranging from small applications that…
discuss one by one the major components of a VICIdial system and how they work together as a solution to your needs.
We will be tackling installation and configuration from scratch using Ubuntu Server 8.04 LTS as my choice of distribution, mainly because most of my deployments are on Ubuntu Server. And yes, we will be installing from scratch.
At the end of this document you will be able to have an understanding of how VICIdial works, how to install it and how to start
Assignment 1. Client Server Configuration
I would recommend a SQL client/server database system.
Client/server systems are constructed so that the database can reside on a central computer, known as a server, and be shared among several users. Users access the server through a client or server application:
In a two-tier client/server system, users run an application on their local computer, known as a client, which connects over a network to the server
of Windows Server 2008 Server Core and Virtual Servers
Windows Server Core offers a number of benefits regardless of its intended use, such as reduced maintenance. By default, a Windows Server Core system has very few binaries installed. When a role is added, only the components that are necessary for the role are installed. The binaries are still present on the system, which allows for those components to be updated during normal patch cycles. No longer will your Windows Servers need updates…
|
OPCFW_CODE
|
How to start an official email to a professor
I want to send an email to a professor to inform him that I want to accept the offer of PhD admission and Research assistantship. But I do not know how to start email. Is the following good? if not, would you please recommend better phrase.
Dear Professor,
I hope you are keeping well ....
You'd want to include at least a name after "Professor," I'd think!
How about this? "Hello Dr. Genius, I gratefully accept your offer of admission and research assistantship, and I look forward to working with you. Regards, user11259"
This should be the least hard thing about graduate school. They won't take it away based on one email, simple is good.
Congratulations on finding a PhD position :) When sending an acceptance e-mail to a professor, I would say the "general" rules of communicating with professors via e-mail apply.
Be polite, and extend thanks when appropriate, but don't overdo the social pleasantries.
The polite part basically holds for any e-mail and live correspondence.
I would say that "I hope you are keeping well" would be a bit unnecessary.
Be short and to the point. Respect the professor's time.
Professors receive many e-mails a day, and are usually very busy. Saying what you want to say clearly not only shows respect for their time, but also makes it more likely that everything written will be carefully read.
Do at least a basic spellcheck.
Especially in the beginning of a communication. It doesn't cost much time and effort, but could leave a bad impression.
Do not attach big files unless explicitly asked for.
Anything larger than a few MB should not be sent unless asked for (e.g. sending all your credentials / application papers when first contacting somebody is bad), and especially not to somebody who you are not collaborating with at the moment.
Beyond this, there is not much else. I already feel like it's hard to call any of these rules; they're just guides based on common sense. Just say what you need to say. I took the comment by @user11259 and extended it here as an example:
Hello Dr./Prof. Brain,
I gratefully accept your offer of admission and research assistantship, and I look forward to working with you.
(Please let me know about the next steps in the admission process I need to take.)
Sincerely,
user11259
This is very culture and person specific. My acceptance email was (translated from Dutch):
Harry:
Sure. When?
-- Maarten
I do not reccommend that as a general style, but it was the right response in my specific case, as in the Netherlands (academic) titles are less important than in other countries and I knew that Harry (my advisor) was even more extreme in his insistence on informal and brief/terse communication.
Almost everyone who interviews attempts to present themselves, their accomplishments, and their personal objectives politely and submissively to the interviewer.
Dear Professor,
I hope you are keeping well ....
Hello Dr./Prof. Brain,
I gratefully accept your offer of admission and research assistantship, and I look forward to working with you.
When you write "I hope you are keeping well" or "I gratefully accept," you are subordinating yourself to your "superior."
This attitude is not "academic," it is playing power politics or submitting to colonialism.
The first thing you must do is read everything you can get your hands on that your professor has published. Then select any aspect of the professor's writing that jumps out at you. Find something Professor Brain has written, and ask her one or more questions about her assertions. SHE MUST RESPOND--SHE CANNOT HELP BUT RESPOND BECAUSE YOU ARE ACTIVATING THE KEYS TO HER PSYCHE.
When Professor Brain responds, find a term or concept she mentions; and ask Professor Brain to elaborate.
After you have had a continuing conversation with Professor Brain for several e-mail messages back and forth, you can discuss any procedural issues you like. But at that point in the discussion, you will have established where you stand, who you are, and how much you care. Do it properly, and YOU WILL BECOME THE SUPERIOR.
There's no need to act subordinate, but I don't see a problem with saying "I gratefully accept" or "I hope you are keeping well" (and I certainly don't see how either one could be considered submitting to colonialism). What you describe comes across as attempted manipulation, and I see it as at best pointless and quite possibly counterproductive.
Not to forget useless... Academics usually take around a week to answer important emails; an exchange of pleasantries will be sent to the bottom of the pit of forgottenness.
I hope that is just trolling and not a serious answer...
I regret that I only have one downvote to give here.
|
STACK_EXCHANGE
|
It’s Saturday, December 9, 2023 and 66°F in Austin, Texas
Adobe Drops Mobile Flash Player Dev
Was Steve Jobs right? A bad fit for mobile devices?
Adobe announced this past week that they would no longer develop new Flash players for browsers on mobile devices, and they would instead concentrate on developing HTML5 tools and standards.
Having an iPhone myself, I'll guess we'll never know how a Flash player would have performed on Apple's hardware. It is likely true that highly interactive Flash elements would have been processor intensive and a battery drain. Given the wide variety of hardware specs on mobile devices, it would have been difficult to determine how your Flash application/element would have performed across those devices.
With the current web focus on SEO and seachability, entirely Flash websites have been out of vogue for quite some time. Those type of websites were also generally extremely time consuming to create and make changes to.
The one area of web design and development that Flash really excelled at was as a universal player of streaming video content. For years, web developers struggled with ways to deliver video to end users. The problem was that there wasn't any universal video player broadly distributed and pre-installed across computer operating systems and browsers. Additionally, the player installed generally determined the video format to use. Prior to the introduction of Flash video streaming capabilities, any videos on websites would require instructions about how the user needed to install QuickTime or Real Media if they wanted to view the video. The Real Media program in particular became a piece of bloatware with lots of popups and advertising. The Windows Media files weren't generally usable on Apple computers. And many of the formats and players required special server hardware to stream the videos effectively, or would require the end user to wait for the entire large video file to download.
Adobe's Flash player essentially solved all those headaches, and it also provided a customizable player with potential for digital rights management. Essentially one video file could be created and served to users on multiple platforms, generally without the need for them to install any additional software. Technically the Flash video files are actually container files with video encoded using one of the supported codecs. But essentially, a web developer could "encode once" and serve video effectively to most web users. The Adobe Flash player became the de facto standard for incorporating video into a website.
The video capabilities of Adobe's Flash player enabled and fostered the creation of the widely popular video based websites like YouTube, Hulu, and Vimeo.
Steve Jobs' decision not to allow Adobe's Flash player onto Apple's mobile devices like the iPhone and iPad -- threw a monkey wrench into video embedding on websites. He advocated for HTML5 video, but HTML5 is still an evolving standard and not incorporated into older browsers still used by many web visitors.
HTML5 video is still somewhat hampered by the competing video formats supported (and not supported) by the different browsers. The 3 main video formats are Ogg Theora, H.264, and VP8 (WebM).
It would be nice if a royalty-free open format like WebM were completely supported across all major browsers. This would solve most of the outstanding HTML5 video issues. But currently, that particular resolution does not seem likely.
So as it stands now, truly implementing video across nearly all browsers and platforms would require the creation of 3 different files:
- One WebM file for HTML5 enabled Firefox, Chrome, Opera, Konqueror
- One H.264 file for HTML5 enabled Internet Explorer and Safari
- One FLV or F4V file for Flash player to play on older non-HTML5 browsers still used by many web visitors
The creation of 3 different files and addition of fallback legacy code for non-HTML5 browsers actually has made embedding video more complicated (not less).
That is the primary reason that so many sites still use the Flash player for embedding video and haven't tried to migrate everything to the still evolving HTML5 standard just yet.
Hopefully the final HTML5 standard will simplify things again, as was the original intent.
Looking for a new website or to update an existing one? Please give us a call at 512 469-7454 and let's discuss your project and its objectives.
ETOOBUSY 🚀 minimal blogging for the impatient
Some notes on the ChrootDirectory directive for OpenSSH.
From the documentation in my system:
Specifies the pathname of a directory to chroot(2) to after authentication. At session startup sshd(8) checks that all components of the pathname are root-owned directories which are not writable by any other user or group.
This can be a bit annoying, because I was expecting to be able to force a user into a writeable directory, especially because my main target is to pair this with an SFTP-only setup (see Setting up an SFTP server for the details).
Alas, this is not possible, so a common workaround is to create a writeable directory inside the directory indicated with ChrootDirectory and let the user write things there.
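For instance, a minimal sketch of that workaround from a root shell (the user name and paths are illustrative, not from an actual setup):

```sh
# the chroot target itself must be root-owned and not writable by others
mkdir -p /srv/sftp/alice/upload
chown root:root /srv/sftp/alice
chmod 755 /srv/sftp/alice

# the writeable directory inside the chroot belongs to the user
chown alice:alice /srv/sftp/alice/upload
```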
After the chroot, sshd(8) changes the working directory to the user’s home directory.
I can only guess that after the chroot there's still a reference to the old filesystem view, which is a leak. So sshd does the directory change to be sure to land inside the new filesystem view.
ChrootDirectory accepts the tokens described in the TOKENS section of sshd_config(5). This means that it's possible to use a few % placeholders to make the path a bit more generic than a single directory. As an example, %u is replaced by the username and %h by the path to their home directory (even though this would have the restriction described above, so it's usually not a viable option). Should we need a literal %, it can be written as %%.
The ChrootDirectory must contain the necessary files and directories to support the user’s session. For an interactive session this requires at least a shell, typically sh(1), and basic /dev nodes such as null(4), zero(4), stdin(4), stdout(4), stderr(4), and tty(4) devices.
Due to the chroot, the filesystem view provided to users is restricted to that directory only, hence they will generally lack a lot of the things that would be needed to log in with a functional shell. Note that adding the shell might imply the need to also provide the shared libraries it relies upon, unless of course it's compiled statically.
For file transfer sessions using SFTP no additional configuration of the environment is necessary if the in-process sftp-server is used, though sessions which use logging may require /dev/log inside the chroot directory on some operating systems (see sftp-server(8) for details).
Apart from the indication about logging, it’s worth remembering that the in-process sftp-server can be enabled with the following configuration:
Subsystem sftp internal-sftp
Otherwise… it would either not be configured, or point to an external program that MUST be found in the new chroot-ed filesystem (this is pretty much the same situation discussed for the shell above).
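Putting the pieces together, a minimal sketch of an SFTP-only chroot setup might look like this (the group name and path are my placeholders):

```
Subsystem sftp internal-sftp

Match Group sftponly
    # must be root-owned and not writable by any other user or group
    ChrootDirectory /srv/sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```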
For safety, it is very important that the directory hierarchy be prevented from modification by other processes on the system (especially those outside the jail). Misconfiguration can lead to unsafe environments which sshd(8) cannot detect.
While I understand the general gist of this warning (e.g. someone might bind-mount stuff inside and open the flood gates), I’m not sure I understand the danger that might come if an external process decides to add some files in that directory. Any light in this direction would be much appreciated!
The default is none, indicating not to chroot(2).
This pretty much seals the documentation, and makes sense as a default because anything else would mean that the administrator has to ensure the proper setup of the filesystem, which is usually not needed in the general case.
I guess it’s everything… future me!
As emphasized by Penrose many years ago, cosmology can only make sense if the world started in a state of exceptionally low entropy. The low-entropy starting point is the ultimate reason that the universe has an arrow of time, without which the second law would not make sense. However, there is no universally accepted explanation of how the universe got into such a special state. Are there some observations that would really tell us that the early universe had low entropy? Is this claim really consistent with our theories?
Entropy is known to be strictly increasing (in the precise sense of positive local entropy production) due to the many dissipative processes in Nature. This is probably the most thoroughly verified fact in physics.
As a consequence, the total entropy of the universe (if this term can indeed be well-defined, which is somewhat questionable) must have been much lower in the past, as in an isolated system (and the universe is by definition isolated), the total entropy increases, too.
This is independent of but consistent with current cosmological models.
On the other hand, the question of why this is so is difficult to answer. Possibly the question is moot, as the total entropy of the universe could also be infinite, in which case it was always infinite.
For an enormously detailed but still very readable answer to this question, see Sean Carroll's book "From Eternity to Here". It's all about this subject. Carroll also has a couple of lectures up at ted.com that give the highlights.
As we look out into space, we're effectively looking back in time. When we see something 10 billion light years away, we're seeing light that left it 10 billion years ago. So by looking at objects different distances away, we can see how the universe has changed with time. There's also the cosmic microwave background radiation, which gives us the most direct information about the state of the very early universe.
The 2nd law of thermodynamics is invalid when we add gravitation to a homogeneous infinite (*) ensemble of particles at rest (at a temperature of 0 K).
What would you expect to happen if we add a small perturbation (a QM consequence)?
Think of a regular crystal where all particles are equally spaced and then one particle moves away from its initial position forming a hole.
A hole will expand because all the particles exterior to it are less attracted to the center of the hole than before.
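As a sketch of why (an illustration added here, using plain Newtonian superposition, not part of the original answer): a uniform medium of density rho with a spherical hole of radius R is equivalent to the full uniform medium plus a sphere of density -rho, so at distance r > R from the hole's centre the leftover field is that of a negative point mass:

```latex
% field of the "missing mass" sphere, by superposition
g(r) = \frac{4\pi G \rho R^{3}}{3\,r^{2}}
\quad \text{(directed radially away from the hole's centre)}
```

so the surrounding shells are pushed outward and the void grows.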
The temperature WILL GROW in the mass shell exterior to the hole and the hole will grow, in an accelerated fashion.
In the real universe such holes are called VOIDS, and galaxies are formed at the intersections of the voids.
In my answer to PSE-anti-gravity-in-an-infinite-lattice-of-point-masses I show the equations of the gravitational field and graphics, under this scenario.
I got downvoted there, and I expect the same now, without argumentation, typical of believers that accept no evidence on contrary to their beliefs.
(*) because the gravitational field propagates at speed c, the ensemble only needs to be 'large enough'.
I am an undergraduate student pursuing Computer Science and Business Administration at Washington and Lee University in Lexington, Virginia. During my time at W&L, I have worked as a Summer Research Scholar and as a STEM IT assistant as part of my work-study program. I am very hard-working, dedicated, focused, intelligent, and driven. I secured a full-ride scholarship to attend W&L.
Student & Programmer.
- Languages: English, Nepali, Hindi, Sherpa
- Interests: Machine Learning, Software Development, and Business Analysis
- Website: lakpafinjusherpa.com
- Looking For: Internship
- Email: email@example.com
I attended the last two years of high school at Pearson College UWC in British Columbia, Canada with a full-ride scholarship. I shared my two years with international students from around the world, broadening my academic knowledge and understanding of different cultures and practices. During my time at Pearson, I led a Computer Science club, worked as an IT assistant, worked as a Summer Camp leader, rode 450 km on a Bike in a week across Kootenay Pass, and organized Asia-pacific and South Asia regional programs to represent the culture from Nepal.
I am always looking for opportunities to learn, grow, and work with people who are passionate about what they are doing.
I am an active learner. I like to learn new skills, new tools, and new subject matters. Below is a subset of skills I possess which I think are relevant to present.
You can find my latest resume here: latest CV
Bachelor of Science
Washington and Lee University, Lexington, Virginia
Pearson College UWC, Victoria, BC, Canada
STEM IT Assistant
2021 - Present
Washington and Lee University, Computer Science Department, Lexington, Virginia, US
- Assemble computer hardware for various science projects and image newly purchased computers regularly
- Installed patch files for log4j vulnerability on different lab machines to prevent remote security threat
- Used Deep Freeze software to freeze computers across the STEM departments after the installation of new software
- Installed software such as R Studio, Microsoft Office package, and PASCO Capstone to meet the needs of different departments
- Benchmarked the hardware of more than 70 computers for reuse
Summer Research Scholar
Washington and Lee University, Lexington, Virginia, US
- Implemented features from research papers to distinguish bot sessions from user sessions in log session files, with the intention of reducing by 30% the unnecessary bot traffic that every website faces today
- Created models using R and Python to perform clustering of the web sessions
- Implemented unsupervised machine learning algorithms such as DBSCAN and K-means to perform clustering
- Developed a comprehensive understanding of WEKA and used it for testing various classifiers on our research data and for feature selection on more than 20 features
- Created generic R scripts for performing statistical analysis such as script to produce Recall and Precision box plot, Jaccard Index, T-test, number of clusters in DBSCAN, and Purities
- Learned Git and Version Control by collaborating with two professors and resolving git issues such as fast-forward, merging conflicts, stashing files, and restoring the deleted files in the local repository
Summer Camp Leader
Pearson College UWC, Victoria, Canada
- Organized cultural activities from Nepal, Ukraine, South Sudan, Peru, and Canada for 500+ children of age 6-14
- Led computer assembling and disassembling workshop with reusable PC for children ages 12-14 to help them discover if they enjoy playing with computer hardware
- Administered children’s safety during kayaking, swimming, and outdoor activities
- Identified conflicts related to bullying, discrimination, and physical contact between children and resolved them based on Camp policies
I am currently pursuing an undergraduate degree at Washington and Lee University. You can reach me via email.
204 W Washington Street, Lexington, Virginia
If you have a PC with more than one graphics card, you should check the settings as this can adversely affect how images display in the application.
If you have a PC, to identify the graphics card:
Click the Start button.
Type devmgmt.msc, and press Enter. Device Manager is displayed.
Click Display adapters. A list of adapters including the graphics card is displayed. For example, NVIDIA GeForce GTX 950KM.
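Alternatively, if you prefer a command line, the following (run in a Command Prompt; offered here as a hedged alternative, not part of the official steps) lists the same adapters:

```
wmic path win32_VideoController get name
```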
If using an NVIDIA graphics card, refer to Checking NVIDIA Settings. If using an AMD graphics card, refer to Checking AMD Settings. If otherwise, refer to the graphics card's documentation to ensure the card is properly set up for the application.
Reminder: You only need to check NVIDIA settings if your PC has more than one graphics card.
To check the settings on a PC do the following:
Go to the Windows search box (at the bottom left of the screen, next to the Start).
Type in NVIDIA. The NVIDIA Control Panel app is displayed.
Click the app to start it.
On the taskbar on the left side, ensure Manage 3D settings is selected.
In the main pane of the app, there are two tabs: Global Settings and Program Settings.
Click Program Settings.
Under Select a program to customise, click the drop-down to display the available programs.
If the appropriate version of VStitcher (or Lotta, if appropriate) is displayed, click to select it and go to step 10.
If VStitcher (or Lotta, if appropriate) is not displayed, click Add. A list of programs is displayed.
Browse to the version of VStitcher (or Lotta, if appropriate) you are using and click to select it. Then click Add Selected Program.
At Select the preferred graphics processor for this program, click the drop-down.
From the available options displayed in the drop-down menu, ensure High-performance NVIDIA processor is selected.
At the bottom right of the NVIDIA Control Panel app, click Apply.
Close the NVIDIA Control Panel app.
Check if there is an update for graphics card drivers on the AMD website. Install the new updated driver if one exists.
On the desktop, right-click the background (away from any icons). A menu is displayed.
Select AMD Radeon Settings. The AMD Radeon Pro and Firepro Settings window is displayed.
Click Switch Driver Mode.
Select the latest Gaming Driver - the one with the highest version number.
Your system switches drivers. Your screen may flicker during this process.
After switching to gaming mode, the user interface displays Radeon Software and a red icon on the taskbar.
Click Driver Options, then:
Select Gaming Driver.
Select Install and Switch Mode.
Click to proceed with the installation and switch modes. Gaming Mode is the best graphics card setting for Browzwear software.
WEBAPP-363: Align event, category and poi models and endpoints
This pull request belongs to an issue on our bugtracker.
You can find it there by looking for an issue with the key which is mentioned in the title of this pull request.
It starts with the keyword WEBAPP.
Sorry for the large PR, but the changes are mostly updated tests because of changes in the models. I'm not quite sure if this is a step forward, but I tend to say yes. The endpoints and models are simpler than before. So just say what you think.
As a next step, I would generalize some of our components (EventDetail, Page and the PoiDetail (does not exist yet), and EventList, CategoryList and PoiList (does not exist yet) are mostly the same, with more or less info displayed).
Haha yeah, in PRs like this, I wish GitHub had a function to provide a pre-defined filter to only show "important" changes...
So my thoughts are the following:
So if you want, we can do sth. like a class-hierarchy, but I don't really like your suggested structure. Your suggestion is:
PageModel
↑ ↑
PoiModel CategoryModel
↑
EventModel
And the DisclaimerModel didn't fit in there. Also, imo, a PoiModel is not a superclass of EventModel (in a semantic way -- probably looking at the attributes it kind of is). But in the future, we probably also should have a reference in an EventModel to the corresponding PoiModel (the location of the event); allowing this PoiModel to be an EventModel doesn't make sense.
Another point, I don't like, is that you generalized all the endpoint-types. We should not do that. These types represent the actual data-representation we expect to get from the cms, and should not have unnecessary optional arguments and arguments we expect should not be optional.
A suggested hierarchy would be for me (although I'm also not 100% convinced of it):
BasePageModel
↑ ↑
DisclaimerModel ContentPageModel
↑ ↑ ↑
EventModel CategoriesModel PoiModel
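For illustration, a minimal sketch of what this hierarchy could look like as plain ES classes (the field names are my assumptions, not the app's actual code):

```js
// illustrative sketch only -- field names are assumptions
class BasePageModel {
  constructor({ id, title }) {
    this.id = id
    this.title = title
  }
}

class DisclaimerModel extends BasePageModel {}

class ContentPageModel extends BasePageModel {
  constructor({ id, title, content, thumbnail }) {
    super({ id, title })
    this.content = content
    this.thumbnail = thumbnail
  }
}

// each leaf would only add what it really needs
class EventModel extends ContentPageModel {}      // plus date and location
class CategoriesModel extends ContentPageModel {} // plus parent/children refs
class PoiModel extends ContentPageModel {}        // plus coordinates, address
```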
Yeah I definitely see your point, thereto:
EventModel can definitely also inherit from PageModel, no problem here.
DisclaimerModel does fit in here, it has exactly the same attributes as categories, but I figured it would be too big an overhead to include this in this structure, since we only use three of these (title, content and modified_gmt), so it does not make sense imo to create a BasePageModel and a ContentPageModel just to fit in the Disclaimer. I would rather just use a CategoryModel for the disclaimer.
So the endpoints should just stay the way they were?
I think we should discuss this in a call. A thing that I miss in this issue/PR is a real benefit of changing our models.
We should start to groom tasks again, because there is much more important architectural stuff to change, for example making the routing easier. This also would break the current react-native use of the endpoint package.
There is no real benefit apart from sticking to DRY, since most of the attributes of our models were the same. What is the problem with react-native?
@maxammann My main reason for doing this is that we already have like 5 different Lists, ListElements and also some detail views which are all roughly the same, but with different designs and approaches. I think this is not how it should be, and since I'm implementing the POI stuff, which will also need a List, ListElement and so on, this is, at least in my opinion, quite important. This PR just makes it easier for me to generalize those components, since the models are more similar than before.
@klinzo You could instead just generalize the lists; so for example you would probably use the same kind-of-list as we use for the events. Then you could just generalize the eventList (for example call it thumbnailList) and an according model if necessary (see also TileModel -- we use the Tiles both for categories and extras)
BTW I just saw that the excerpt of the news shouldn't be contained in a RemoteContent, since it's a plain string we get from the cms. (Also, it looks odd that there are 3 different font styles for each news entry.)
Yes, you should generalize the input of components, then map the models to the input. In Java you sometimes create a View of objects so they have the interface you need. There are also interfaces in Flow: https://flow.org/en/docs/types/interfaces/
We should also think about using a framework for components: https://hackernoon.com/23-best-react-ui-component-libraries-and-frameworks-250a81b2ac42
Main reason is accessibility. Also could force us to follow a more strict guideline.
Yeah sure, I'll generalize the lists, that was also my plan. I think it would be good to know what the problem is though and why this pr is bad, especially the react native topic. @maxammann
I think we should remove DisclaimerPage and use a PageModel which fits the disclaimer API. So we have a structure which makes sense.
The disclaimer api looks like this: https://cms.integreat-app.de/nuernberg/de/wp-json/extensions/v3/disclaimer
This would simplify the structure even more and would fit the models in the cms a bit more.
Apart from this I'm ok with merging :+1:
Siwei Ma focuses on Artificial intelligence, Computer vision, Algorithm, Coding tree unit and Discrete cosine transform. His research on Artificial intelligence frequently links to adjacent areas such as Pattern recognition. His Computer vision study combines topics in areas such as Distortion, Video quality and Augmented Lagrangian method.
His Algorithm research includes themes of View synthesis and Rate–distortion optimization. His work carried out in the field of Coding tree unit brings together such families of science as Multiview Video Coding, Speedup and Context-adaptive binary arithmetic coding. His Discrete cosine transform study deals with Speech recognition intersecting with Normalization, Block code and Iterative method.
His scientific interests lie mostly in Artificial intelligence, Computer vision, Algorithm, Pattern recognition and Coding tree unit. He is involved in several facets of Artificial intelligence study, as is seen in his studies on Pixel, Motion compensation, Data compression, Iterative reconstruction and Image quality. In most of his Computer vision studies, his work intersects topics such as Distortion.
His research in Algorithm intersects with topics in Real-time computing, Rate–distortion optimization and Discrete cosine transform. His research in Pattern recognition focuses on subjects like Image restoration, which are connected to Regularization. His Coding tree unit course of study focuses on Context-adaptive binary arithmetic coding and Sub-band coding and Macroblock.
His primary areas of study are Artificial intelligence, Algorithm, Computer vision, Deep learning and Decoding methods. Siwei Ma has included themes like Machine learning and Pattern recognition in his Artificial intelligence study. Siwei Ma has researched Pattern recognition in several fields, including Distortion and Feature.
As part of his studies on Algorithm, Siwei Ma frequently links adjacent subjects like Random access. The concepts of his Computer vision study are interwoven with issues in Visualization and Codec. His research integrates issues of Fingerprint, Speaker recognition, Speech recognition and Biometrics in his study of Deep learning.
Siwei Ma spends much of his time researching Artificial intelligence, Algorithm, Deep learning, Computer vision and Convolutional neural network. His Artificial intelligence research incorporates elements of Machine learning and Pattern recognition. His research in Algorithm tackles topics such as Random access which are related to areas like Interleaving and Decoding methods.
His studies deal with areas such as Image processing, Image, Reference frame, Absolute value and Cross entropy as well as Deep learning. Siwei Ma interconnects Visualization and Solid modeling in the investigation of issues within Computer vision. His Motion compensation research integrates issues from Image resolution and Motion field.
This overview was generated by a machine learning system which analysed the scientist's body of work.
Rate-distortion analysis for H.264/AVC video coding and its application to rate control
S. Ma;Wen Gao;Yan Lu.
IEEE Transactions on Circuits and Systems for Video Technology (2005)
Fast mode decision algorithm for intra prediction in HEVC
Liang Zhao;Li Zhang;Siwei Ma;Debin Zhao.
visual communications and image processing (2011)
Pre-Trained Image Processing Transformer
Hanting Chen;Yunhe Wang;Tianyu Guo;Chang Xu.
computer vision and pattern recognition (2021)
SSIM-Motivated Rate-Distortion Optimization for Video Coding
Shiqi Wang;A. Rehman;Zhou Wang;Siwei Ma.
IEEE Transactions on Circuits and Systems for Video Technology (2012)
Adaptive rate control for H.264
Zhengguo Li;Wen Gao;Feng Pan;S. W. Ma.
Journal of Visual Communication and Image Representation (2006)
Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis
Qi Mao;Hsin-Ying Lee;Hung-Yu Tseng;Siwei Ma.
computer vision and pattern recognition (2019)
Image Restoration Using Joint Statistical Modeling in a Space-Transform Domain
Jian Zhang;Debin Zhao;Ruiqin Xiong;Siwei Ma.
IEEE Transactions on Circuits and Systems for Video Technology (2014)
Image Compressive Sensing Recovery via Collaborative Sparsity
Jian Zhang;Debin Zhao;Chen Zhao;Ruiqin Xiong.
IEEE Journal on Emerging and Selected Topics in Circuits and Systems (2012)
Compression Artifact Reduction by Overlapped-Block Transform Coefficient Estimation With Block Similarity
Xinfeng Zhang;Ruiqin Xiong;Xiaopeng Fan;Siwei Ma.
IEEE Transactions on Image Processing (2013)
Blind Quality Assessment of Tone-Mapped Images Via Analysis of Information, Naturalness, and Structure
Ke Gu;Shiqi Wang;Guangtao Zhai;Siwei Ma.
IEEE Transactions on Multimedia (2016)
Developer Interview: Mi Clos & Sigma Theory (28 Sep 2017)
Mi Clos is another studio we have a lot of time for. The great Owen Faraday himself gave their debut title Out There a glowing review. One of their upcoming projects that we're keeping an eye on is Sigma Theory, which is meant to offer a new spin on the spy genre. The game was playable at Gamescom, but since we weren't there we sent intrepid Pocket Tactics writer Michael Coffer to try and find out more…
Pocket Tactics: For those of us who haven't had a chance to demo Sigma Theory at Gamescom, what's the quick summary of who we are and what we're doing in this game?
Michael Peiffert: In this game, you are at the head of an intelligence agency of your country, and they have created this division because of a disruptive discovery in science. All the major powers want to race towards these new technological horizons. It's like a global futuristic Cold War and you aim to spy on the other countries and steal their secrets. You have four special agents that you recruit and send all over the world, assigning them secret missions. They locate important scientists on these missions and try to convince them by seduction, persuasion or force to come work for you. There are very few of these significant scientists, about twenty.
Pocket Tactics: Is victory determined by the first to get a certain number of scientists? Or are there a finite number of turns, with the most at the conclusion winning?
Michael Peiffert: It shouldn't be viewed as a simple winner-take-all strategy game. Many of the odds and mechanics which determine outcomes are not obvious to the player, just as in our other game Out There, so the game is also about developing relationships with the other organizations. How you proceed, who becomes your allies or enemies, the choices selected on missions combine to make an emergent narrative which will lead to different endings or different losing conditions.
Pocket Tactics: So it's like Out There in that there's a branching story and you're moving through that story, and you arrive at different points depending on the choices you make?
Michael Peiffert: Yeah and the narrative is tied to which technology you pick from the tech tree.
Pocket Tactics: Oh so there's a tech tree? I must have missed that in my research...
Michael Peiffert: There's a tech tree with five branches, covering stuff like robotics and neuroscience to finance. Each branch has five items to discover, and each of those represents a major discovery in the field. For example, you can discover immortality and then decide to keep it for your country exclusively, or sell it to a private lobbyist, or open source the protocol. Each would have different outcomes, sometimes vastly different. If you decide to sell it to a third party, maybe they will keep it for the elites. Actually, the tech tree is like the branching story of the game. When you have discovered enough breakthroughs, you can unlock the Sigma Theory. The 'S' in Sigma stands for singularity. Depending on which tech you uncover and what you do with it, the ending of the game will change.
Pocket Tactics: That's a very ambitious way to attempt a branching storyline. These other nations, how does the competition work?
Michael Peiffert: It's a single-player game. We've developed different AI for the other countries. There are 10 in the game, with different behaviours and personalities. Some will be aggressive, some will try to find allies. Everyone is fighting over the scientists. Along the way, nations might drop out of the game by losing all their agents or failing to secure any scientists. Some technology unlocks can harm specific countries. Countries can fall to coups d’état or in wars waged against each other.
Pocket Tactics: Lots of ways to drop out of the race, then. Would you say that this is a cyberpunk game in some way, and if so, how?
Michael Peiffert: Not really. Actually, we wanted to put the player in a position of power and accountability and see how they handle things, and to push the boundaries of decision making. For example, you might be approached by a lobbyist who offers a bonus if you do what they want. The goal is to make a game where, ultimately, there are no absolutely good or bad decisions from a gameplay standpoint.
Pocket Tactics: It sounds more like futurism than cyberpunk, or even ‘near-futurism’. What inspirations have influenced your tech tree?
Michael Peiffert: For the technologies, we've taken inspiration from what exists and tried to project that into how it might evolve in the future. Artificial intelligence, military technology, all sorts of crazy stuff which might appear in the next ten years. SpaceX preparing to go to Mars, wealthy moguls funding longevity research...
Pocket Tactics: This game seems like it has a strong narrative focus but you're also making tactical calls with each mission. How is all that supposed to play out?
Michael Peiffert: When you assign an agent to a country, there are several ways to proceed. First the agent has to spot the scientist and then profile the scientist's traits and preferences. After learning this, you can select the best agent and one of several ways to recruit them. You can seduce them, convert them ideologically, attempt bribery or, most dangerously, attempt to take them by force. If you convince the scientist, the mission ends either with successfully exfiltrating them, or with them staying put and acting on your behalf as a double agent. Or if you bribed them, you need to send money occasionally, or spend quality time with them if you seduced the scientist. It is more like action but also a bit like interactive fiction. Everything boils down to choices and consequences.
When you recruit the agents, it is from a pool of fifty. They have different backgrounds and characteristics; some might refuse to kill on command. It's a very diverse cast and each will have unique behaviors that will spice up the decisions.
Pocket Tactics: What does the rest of the development roadmap look like?
Michael Peiffert: We have implemented almost all of the main features. Currently we are working on the AI and polishing the narrative elements. We plan to have it available to the public early next year. Maybe with a beta or Early Access, we'll see. We really want to do something like Early Access because there are a lot of reactions to the gameplay or narrative and we want to see those. We're doing our best to make it available in this form.
Many thanks to Michael Peiffert of Mi Clos Studio for speaking to us at great length about their game. Sigma Theory is coming to iOS, Android and PC sometime in 2018.
50+ Python web scraping scripts with proxy and mysql, for experts only. Please don't bid if you are not an expert. Fixed budget: $5/script. Deadline 3 days. More details in chat. Thanks
The python script is written, but somehow the connection is disruptive and it cannot pull data into the mysql database. Need to fix it and quickly make a webpage to display the data.
I scrape contact i...info including email address from a website. The script on the website has split the email address into multiple parts to avoid bots. I need somebody to fix it. No manual work, just need to find a solution. Total listing is around 9k; attached is a file with just 500 listings. Fixed budget $10. OR help me with python code for multi-threading
I have a python script that gathers data through an API and requires the use of a Firefox browser to gather the data. Firefox recently required me to update to the latest browser and the script no longer works. I am sure this is a 20 or 30 minute fix (if not faster), but I'm glad to pay up to $100 to someone who can fix it for me in the next 8 hours
I need the attached Python script rewritten in pure Java with HtmlUnit and no non-Java dependencies (no Selenium/geckodriver/etc). This is a super-simple project, I need someone who can implement quickly no questions asked. The 7z file password is freelanc3r ABOUT ME: ======== I am "AfterHoursTech", a U.S.-based buyer. I have purchased close to 1,000
I have a working python script that generates RSS feeds in xml format from different sites. (E.g. blogspot sites, wordpress sites, etc.) TL;DR I need 4 things: 1.) solve "time out" error for selenium plugin 2.) edit script so one version can work without selenium if unnecessary 3.) edit script so no ads are scraped 4.) record/screen cap so I can edit
Hi, a python script was created for me a year ago. This python script did the following: automatic upload of files that are located in one folder to a website portal. For instance, I have a file called "[login to view URL]" on the drive W:\Portal Input. If I double click the portal python script, it runs and loads it up to my client portal webs...
We have a login script to log in to the Instagram social media site. There are some issues: it does not work correctly, and sometimes it stops without any visible issue. The script is written in Python and uses the selenium driver. We are currently looking for someone who can fix this issue and probably work with this developer in the future.
I have a python script; it should save an array as JSON data and into a mysql database. Some scraping is not working. You need to fix it and upload it to my server. I am seeking someone with knowledge of python and selenium
Hello, I have a python script. I already added proxies in the script, but I think they're not working, because when I run the scraper it gives me the error "max retry with this url ...." after some time. So I need to fix the problem asap. Waiting for your bids.
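For reference, a generic sketch of the usual approach to this class of error (not any poster's actual script; the proxy address is a placeholder) is to mount a retry-aware adapter on a requests session:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# retry transient failures with exponential backoff instead of
# dying with "Max retries exceeded with url ..."
retry = Retry(total=5, backoff_factor=1,
              status_forcelist=[429, 500, 502, 503, 504])

session = requests.Session()
session.mount('http://', HTTPAdapter(max_retries=retry))
session.mount('https://', HTTPAdapter(max_retries=retry))

proxies = {'http': 'http://user:pass@proxy.example.com:8080',
           'https': 'http://user:pass@proxy.example.com:8080'}

resp = session.get('https://example.com', proxies=proxies, timeout=10)
print(resp.status_code)
```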
I have a computer vision script in Python that analyze images from network cameras. After a few hours running, the script starts outputting the following error: error while decoding MB 0 14, bytestream (4035) [h264 @ 0x458bd60] non-existing SPS 2 referenced in buffering period [h264 @ 0x47b7300] non-existing SPS 2 referenced in buffering period [h264
I have a python script that listens for TCP data on my server on port 1234 and saves it to a file. I have a device (time attendance) that sends data to server:1234; the device pings the server every 10 seconds and checks if the server requests something. If yes, the device responds to that request. So can you fix it? Check the protocol; I can send my script so you can modify it
Fix a problem with a Python script - must be good with the Requests module and web scraping!
Hi, I need to fix a feature of my python script; I think it has an easily fixable problem. Here is the error: File "/Library/Frameworks/[login to view URL]", line 164, in get [login to view URL]() File "/Library/Frameworks/[login to view URL]", line 295, in wait [login to view URL]()
SMS script which is in Python 3.6 needs fixing. Message if you are able to apply a fix. Specs: - Async Aiohttp - Python 3.6 - Supports proxies Current problem: it is completing a verification process alright, which is a good thing; however, when there are multiple verification tasks, it is getting
...some explanations, comments and minor fixes to deliver to the client. Everything is fully analyzed in the ATTACHED doc. Also, the client says the python script crashes; why is this happening? The freelancer who created the script managed to run it. I also attach the documented code and the description of the initial project. Kindly review and let me know if interested
Hello, I need someone to fix an existing python script using the scrapy framework. The script / spider worked well for one year and scraped a site at a speed of 500 items per minute using 50 dedicated private proxies. Now the spider gets blocked / banned and I need an expert to solve this. You should know that one expert already failed, so it seems
What the script does: replace all keywords from 1 line in files/[login to view URL] with a word from files/[login to view URL] and mark the word as used. It's working fine with 99% of the content I need to change, but I have a problem with the last variables in filesToReplace[login to view URL]. The picture shows my desired result and a short explanation
ONLY PYTHON EXPERTS BID ON THE PROJECT. fix my session issue
Hi, I have found a script online that I require to be modified. The script can be found here: [login to view URL] The script listens for dns queries on a server and responds with the same IP address for all requests. Currently this is not working for me, as when I use the dig tool on linux the IP address
If you are good at or a master of python scripting, please message me. I want to fix a program written in python so that it will work
We would like to hire a freelancer well versed in Python to complete an application which involves web API queries and face morphing . The planned application takes as input two facial images of different individuals, one from a social media site and one a matched stock model image, and produces as output a "morph" of the two faces. The application
We want to hire a freelancer well versed in Python to complete an application which involves web API queries and face morphing . The application takes as input two facial images of different individuals, one from a social media site and one a matched stock model, and produces as output a "morph" of the two faces. The application is nearly completed
... But I've made a script in python to do it automatically. It works well if I use it on one single account. However, some problem prevents everyone from renewing. It seems that the "browser" detects that you logged in very briefly and wanted to change in a short time, and the web blocked it. Whatever it is, I would like you to fix it. I am using the
Hello, I am after a python developer to fix a broken Nmap scan script(flask-Rest API) project and do some changes on it. I can't reach the original developer, so I need someone to help me out. The script is supposed to work in master-slave design and run multiple scans simultaneously. In short, there is a master that holds a DB for the scan results
I have a script written by someone. It opens a webpage and keeps loading, clicking the 'load more results' button at the bottom of the webpage until all the results are loaded. Then it starts grabbing. I don't know what happened; it suddenly started throwing a timeout error. Let me know if you can do it, but I need it very urgently, maybe in a few hours. It's very easy
I have a code where I need a fix in the python script to connect to Bluetooth devices and pair automatically.
Tor before 0.3.5.16, 0.4.5.10, and 0.4.6.7 mishandles the relationship between batch-signature verification and single-signature verification, leading to a remote assertion failure, aka TROVE-2021-007.
A denial of service vulnerability exists in the ASDU message processing functionality of MZ Automation GmbH lib60870.NET 2.2.0. A specially crafted network request can lead to loss of communications. An attacker can send an unauthenticated message to trigger this vulnerability.
Knot Resolver before 5.3.2 is prone to an assertion failure, triggerable by a remote attacker in an edge case (NSEC3 with too many iterations used for a positive wildcard proof).
liveMedia/FramedSource.cpp in Live555 through 1.08 allows an assertion failure and application exit via multiple SETUP and PLAY commands.
TensorFlow is an end-to-end open source platform for machine learning. In affected versions providing a negative element to `num_elements` list argument of `tf.raw_ops.TensorListReserve` causes the runtime to abort the process due to reallocating a `std::vector` to have a negative number of elements. The [implementation](https://github.com/tensorflow/tensorflow/blob/8d72537c6abf5a44103b57b9c2e22c14f5f49698/tensorflow/core/kernels/list_kernels.cc#L312) calls `std::vector.resize()` with the new size controlled by input given by the user, without checking that this input is valid. We have patched the issue in GitHub commit 8a6e874437670045e6c7dc6154c7412b4a2135e2. The fix will be included in TensorFlow 2.6.0. We will also cherrypick this commit on TensorFlow 2.5.1, TensorFlow 2.4.3, and TensorFlow 2.3.4, as these are also affected and still in supported range.
Exiv2 is a command-line utility and C++ library for reading, writing, deleting, and modifying the metadata of image files. The assertion failure is triggered when Exiv2 is used to modify the metadata of a crafted image file. An attacker could potentially exploit the vulnerability to cause a denial of service, if they can trick the victim into running Exiv2 on a crafted image file. Note that this bug is only triggered when modifying the metadata, which is a less frequently used Exiv2 operation than reading the metadata. For example, to trigger the bug in the Exiv2 command-line application, you need to add an extra command-line argument such as `fi`. ### Patches The bug is fixed in version v0.27.5. ### References Regression test and bug fix: #1739 ### For more information Please see our [security policy](https://github.com/Exiv2/exiv2/security/policy) for information about Exiv2 security.
An issue was discovered in PJSIP in Asterisk before 16.19.1 and before 18.5.1. To exploit, a re-INVITE without SDP must be received after Asterisk has sent a BYE request.
** UNSUPPORTED WHEN ASSIGNED ** Polipo through 1.1.1 allows denial of service via a reachable assertion during parsing of a malformed Range header. NOTE: This vulnerability only affects products that are no longer supported by the maintainer.
Possible assertion due to improper verification while creating and deleting the peer in Snapdragon Auto, Snapdragon Compute, Snapdragon Connectivity, Snapdragon Consumer Electronics Connectivity, Snapdragon Consumer IOT, Snapdragon Industrial IOT, Snapdragon Mobile, Snapdragon Voice & Music, Snapdragon Wired Infrastructure and Networking
Denial of service in SAP case due to improper handling of connections when association is rejected in Snapdragon Auto, Snapdragon Compute, Snapdragon Connectivity, Snapdragon Consumer Electronics Connectivity, Snapdragon Consumer IOT, Snapdragon Industrial IOT, Snapdragon IoT, Snapdragon Mobile, Snapdragon Voice & Music, Snapdragon Wearables
[openstack-dev] [keystone] [nova] keystoneauth catalog workarounds hiding transition issues
jamielennox at gmail.com
Tue Feb 28 04:10:04 UTC 2017
On 27 February 2017 at 08:56, Sean Dague <sean at dague.net> wrote:
> We recently implemented a Nova feature around validating that project_id
> for quotas we real in keystone. After that merged, trippleo builds
> started to fail because their undercloud did not specify the 'identity'
> service as the unversioned endpoint.
> - (code merged in Nova).
> After some debug, it was clear that '/v2.0/v3/projects/...' was what was
> being called. And after lots of conferring in the Keystone room, we
> definitely made sure that the code in question was correct. The thing I
> wanted to do was make the failure more clear.
> The suggestion was made to use the following code approach:
> resp = sess.get('/projects/%s' % project_id,
>                 endpoint_filter={'service_type': 'identity',
>                                  'version': (3, 0)})
> However, I tested that manually with an identity =>
> http://............/v2.0 endpoint, and it passes. Which confused me.
> Until I found this -
> keystoneauth is specifically coding around the keystone transition from a
> versioned /v2.0 endpoint to an unversioned one.....
> While that is good for the python ecosystem using it, it's actually
> *quite* bad for the rest of our ecosystem (direct REST, java, ruby, go,
> js, php), because it means that all other facilities need the same work
> around. I actually wonder if this is one of the in the field reasons for
> why the transition from v2 -> v3 is going slow. That's actually going to
> potentially break a lot of software.
> It feels like this whole discovery version hack bit should be removed -
> https://review.openstack.org/#/c/438483/. It also feels like a migration
> path for non python software in changing the catalog entries needs to be
> figured out as well.
> I think on the Nova side we need to go back to looking for bogus
> endpoint because we don't want issues like this hidden from us.
So I would completely agree - I would like to see this behaviour go away.
But it was done very intentionally - and at the time it was written it was
necessary.
This is one of a number of situations where keystoneauth tried its best to
paper over inconsistencies in OpenStack APIs because to various levels of
effectiveness almost all the python clients were doing this. And whilst we
have slowly pushed the documentation and standard deployment procedures to
unversioned URLs whilst this hack was maintained in keystoneauth we didn't
have to fix it individually for every client.
Where python and keystoneauth are different from every other language is
that the services themselves are written in python and using these
libraries and inter-service communication had to continue to work
throughout the transition. You may remember the fun we had trying to change
to v3 auth and unversioned URLs in devstack? This hack is what made it
possible at all. As you say this is extremely difficult for other
languages, but it's something there isn't a solution for whilst this
transition is in place.
Anyway a few cycles later we are in a different position and a new service
such as the placement API can decide that it shouldn't work at all if the
catalog isn't configured as OpenStack advises. This is great! We can
effectively force deployments to transition to unversioned URLs. We can't
change the default behaviour in keystoneauth but it should be relatively
easy to give you an adapter that doesn't do this. Probably something like
. I also filed it as a bug, which links to this thread, but it could
otherwise do with some more detail.
Long story short, sorry but it'll have to be a new flag. Yes, keystoneauth
is supposed to be a low-level request maker, but it is also trying to paper
over a number of historical bad decisions so at the very least the user
experience is correct and we don't have clients re-inventing it themselves.
2.5 `ScrollView` and `ListView`
Content doesn’t always fit onto a single screen. For these cases, we have scrollable views and list views. In this lesson, you will learn about those important components.
Hi, and welcome back to Get Started with React Native. In this lesson, we are going to look at two important components of React Native that might not have a use case in the course project, but are important for most apps nonetheless.

Let's have a look at ScrollView first. It is useful if you have content that needs scrolling, like the name says, meaning that the content is larger than what fits on the screen. A ScrollView is like a regular view, a wrapper component that accepts some additional parameters to configure its behavior. The most commonly used parameters are setting the arrangement by using the horizontal parameter, enabling or disabling paging, or having a pull-to-refresh control. There is also a very useful function that you can use to immediately scroll to the top.

So, a ScrollView seems useful if you have many items that are displayed in a scrolling list. Well, not quite. If you have a big number of items where only a few of them are visible on the screen, it's better to use a ListView. It is more complex than a simple ScrollView, but it only renders what's visible. The most essential part of a ListView is its data source. This defines the content of the ListView. To create it, you have to create a new instance of ListView.DataSource and pass a callback function that determines if a row has changed. One of these implementations uses just the equality operator. Whenever you change the data, you need to call cloneWithRows, or cloneWithRowsAndSections if you have grouped data. The last thing you need is to define the renderRow callback on the component, which defines what gets rendered for each row. There is also the renderSeparator callback that gets called whenever a separator between two rows is needed. If you don't implement it, there won't be any separation. If you want to handle touches on rows, just use TouchableHighlight and let it act like a button.

Okay, that's the theory, let's get into examples. For the time being I'm commenting out our previous code in the render function of the index file and using it as a playground. For the ScrollView, let's see two examples. In the first one, we have a lot of text, which, for example, happens in a description. I added some padding to the text itself so that paragraphs become more visible when scrolling. Without any custom value set on the ScrollView, you'll get a vertically scrolling view that works quite well.

Let's have a look at another common example: a scrollable photo view. I have an image that is in the public domain, and I want to set a few parameters on the image itself so it looks a bit nicer. Then let's change the horizontal and pagingEnabled properties to true. Notice that you can't use true directly; you have to use curly braces around it, since it's a logical value. Now it looks like a photo browser. You can swipe to the right or left, and if you go over the threshold, it automatically shows the next page.

Okay, let's have a look at the ListView. As you already know, we need a data source. I'm going to initialize this in the constructor and set the state with it, again by using cloneWithRows. Here, I'm adding some names. The ListView itself is quite simple. We specify the data source and create a callback to render a single row. I can even get fancy here and reuse the image from above as an additional element. The only thing I have to make sure of is to wrap it in a single top-level component. What's left is the separator, to complete the list.
Just using an empty view component with a bit of styling is enough. There you go. Once you have a strategy for populating your data source, ListViews are a really simple and powerful instrument for creating your user interface. To recap, when you have content that is potentially larger than the screen, use the ScrollView. If you have a list of items that is much larger than what fits on the screen, it isn't the best option; for that, a ListView is the way to go. ListViews have a data source, which is queried for rows, making them much more efficient. One of the most common use cases of ListViews is data fetched from a server. In the next lesson, we're going to look at more advanced components, like the MapView, that sometimes come from a third party and need to be added to the project separately. See you there.
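To tie the walkthrough together, here is a minimal sketch of both components (using the old ListView API this lesson describes; the image URL and style values are placeholders, not the course's actual code):

```js
import React, { Component } from 'react';
import { ScrollView, ListView, Image, Text, View } from 'react-native';

const PHOTO = { uri: 'https://example.com/photo.jpg' }; // placeholder

export default class Playground extends Component {
  constructor(props) {
    super(props);
    // the data source decides, via this callback, whether a row changed
    const ds = new ListView.DataSource({ rowHasChanged: (a, b) => a !== b });
    this.state = { dataSource: ds.cloneWithRows(['Ada', 'Grace', 'Alan']) };
  }

  render() {
    return (
      <View style={{ flex: 1 }}>
        {/* horizontal, paging photo browser */}
        <ScrollView horizontal={true} pagingEnabled={true} style={{ height: 200 }}>
          <Image source={PHOTO} style={{ width: 375, height: 200 }} />
          <Image source={PHOTO} style={{ width: 375, height: 200 }} />
        </ScrollView>

        {/* only visible rows get rendered */}
        <ListView
          dataSource={this.state.dataSource}
          renderRow={name => <Text style={{ padding: 8 }}>{name}</Text>}
          renderSeparator={(sectionID, rowID) => (
            <View key={`${sectionID}-${rowID}`}
                  style={{ height: 1, backgroundColor: '#ccc' }} />
          )}
        />
      </View>
    );
  }
}
```

(Newer React Native versions replaced ListView with FlatList, but the data-source pattern above is the one this lesson covers.)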
SIMPLE 1.0 is not a standalone suite for single-particle reconstruction, but complements existing developments. It is assumed that the windowed single-particle images represent 2D projections, and therefore, that the projection slice theorem applies (Bracewell, 1956). This requires correction of the micrographs for the contrast transfer function (CTF) of the electron microscope and particle windowing using other software. In addition, the windowed projections should be roughly centered in the box. Many program suites are available for dealing with these and other matters (Frank et al., 1996; Hohn et al., 2007; Ludtke et al., 1999; Sorzano et al., 2004). In EMAN, the envelope component of the CTF, describing the fall-off of the signal with resolution, is parameterized and then used in the class averaging procedure in the refinement. Some implementations only phase-correct the micrographs, ignoring the envelope function initially. Instead, the final refined maps are B-factor sharpened. SIMPLE 1.0 has so far been used in combination with the latter approach, assuming the same resolution fall-off for all micrographs. The SIMPLE 1.0 workflow is divided into four phases:
We usually execute phase (1) by parameterizing the CTF with Ctffind3 (Mindell and Grigorieff, 2003), doing CTF phase correction of the micrographs in Spider (Frank et al., 1996), and using EMAN’s (Ludtke et al., 1999) program BOXER for particle windowing. EMAN’s graphical user interface is used for quick inspection of volumes and class averages. UCSF Chimera is used for more detailed analysis of the reconstructed volumes (Pettersen et al., 2004). Phases (2)-(4) are executed with SIMPLE 1.0.
Program cluster aligns the images with the reference-free 2D alignment algorithm, clusters them, and calculates class averages. The outputted class averages are inputted to program origami, which optimizes the spectral ensemble common line correlation with simulated annealing (Elmlund et al., 2010). The search space of the annealing is composed of projection directions only. Consequently, many thousands of class averages can be aligned in a few hours on a present generation workstation. The annealing is restarted three times on random starting configurations. The best of the three solutions is subjected to further optimization. The second optimization step includes the additional origin shift and conformational state parameters in a greedy adaptive local search-based scheme, steering a differential evolution optimizer (Das et al., 2009; Feo and Resende, 1995). Output from origami consists of alignment documents and reconstructed volumes. Origami offers the possibility to partition the data over multiple state groups. At this stage, however, it is seldom meaningful to attempt heterogeneity analysis, since the in-plane alignment is too crude for the classification to provide the required resolution. For implementation details, see the open SIMPLE 1.0 source code (classes simple_comlin, simple_comlin_corr, simple_rfree_search, simple_sa_opt and simple_de_opt).
Accurate in-plane alignment is required for the unsupervised classification to produce homogeneous class averages (Elmlund et al., 2010). The approximate in-plane alignment obtained by the reference-free 2D alignment algorithm therefore needs refinement. In SIMPLE 1.0, projection matching onto the ab initio reconstruction is applied to refine the in-plane degrees of freedom. The continuous, Fourier-based projection matching executed by programs cycler or align searches all degrees of freedom. The refined in-plane parameters are inputted to program cluster, which generates class averages with improved resolution. These averages are again subjected to ab initio alignment and heterogeneity analysis in program origami, which searches projection directions and state assignments only. After the first approximate ab initio reconstruction has been obtained, a single iteration of bijective orientation search is thus composed of these steps: projection matching to refine the in-plane parameters, re-generation of class averages, and common lines-based assignment of projection directions and states for the new averages.
The process is iterated until the structural features of the reconstructions and the number of states is consistent between successive rounds. Partitioning the data into different number of state groups is fast and usually uncovers the suitable number of states to reconstruct. The bijective orientation search method reduces computational complexity, is free from reference bias, and enables efficient reconstruction of flexible single-particles. After convergence, single-particle refinement is applied as has been done by others to improve the resolution of the maps.
The align, reconstruct, and automask programs are iterated to accomplish high-resolution refinement. Program cycler combines these components into a single executable that is suitable for shared-memory multi-processor architectures, since the parallelization is based on the OpenMP protocol. For clusters we recommend distributed execution via partition_master.pl found in the apps folder.
Figure: Heterogeneity analysis via bijective orientation search. Programs, flow of data, constraints, and parameters. N is the number of images, C is the number of clusters. The 3D reference-based alignment searches all degrees of freedom: the Euler angles, the origin shifts, and the state assignments. The reference-based output is used to align the data in-plane before image clustering. Class averages are subjected to reference-free common lines-based assignment of projection directions and states. A first approximate 3D alignment is obtained in a discrete angular space composed of projection directions only, subject to the constraint that no two averages are allowed to occupy the same direction. A differential evolution algorithm searches the remaining parameters. Unbiased volumes are reconstructed and used for another round of projection matching.
fix(propEq): improve propEq typings
remove unnecessary constraint from propEq value: the function should receive any type values of object, no matter what parameter it received;
add additional types to use with placeholder
remove unnecessary constraint from propEq value: the function should receive any type values of object, no matter what parameter it received;
I'm afraid I have to disagree with this. It is not unnecessary at all. It is important for type safety. With the current implementation, you get this level of safety
import { propEq } from 'ramda';
const doesFooEq123 = propEq(123, 'foo');
// ^? (obj: Record<'foo', number>) => boolean
// now consider how this gives a type error
'123' === 123; // Type error: This comparison appears to be unintentional because the types 'string' and 'number' have no overlap.
// With the current typing, you get a similar error
doesFooEq123({ foo: '123' }); // Type 'string' is not assignable to type 'number'.
Playground: https://tsplay.dev/wgBX9W
add additional types to use with placeholder
The added currying support with Placeholder is fine; however, there are a few situations that don't make sense to me. I'll leave inline comments for those.
I looked at your example and added an additional test:
type Obj = { key: 'A' | 'B' };
function doesEqKey(val: string, obj: Obj) {
return obj.key === val;
}
TypeScript gives no issue here. The reason, I found, is that at least one side is assignable to the other. However, do understand that val would not be assignable to obj.key:
obj.key = val; // Type 'string' is not assignable to type '"A" | "B"'.
So for assoc, we would want to keep the way it is typed, same as propEq is now. But you are correct that propEq does need to be loosened a bit.
However, any is still not appropriate here, because we don't want to mix types.
function doesEqKey(val: number, obj: Obj) {
return obj.key === val; // This comparison appears to be unintentional because the types 'string' and 'number' have no overlap
}
We do have a solution for what you are looking for: in types/util/tools.d.ts, use WidenLiterals<T>
import { WidenLiterals } from './util/tools';
export function propEq<K extends keyof U, U>(val: WidenLiterals<U[K]>, name: K, obj: U): boolean;
What this will do is collapse down val from 'A' | 'B' to be string. This should properly mimic the behavior for both doesEqKey functions I have above. Particularly that any string can be compared to 'A' | 'B', but not number or other types that don't overlap
Thank you for the tip about WidenLiterals type. It did help me and fixed some of the false negative results that I had.
Unfortunately, there are a lot of other kinds of errors that I wasn't able to fix. I created another draft PR here, where you can find more tests and the version of propEq typings that utilizes WidenLiterals, but as you can see, most of the tests are failing. Moreover, other tests are failing too, like anyPass and allPass.
const isOld = propEq(212, 'age');
const isAllergicToGarlic = propEq(true, 'garlic_allergy');
const isAllergicToSun = propEq(true, 'sun_allergy');
const isFast = propEq(null, 'fast');
const isAfraid = propEq(undefined, 'fear');
const isVampire = anyPass([
isOld,
isAllergicToGarlic,
isAllergicToSun,
isFast,
isAfraid
]);
expectType<boolean>(
isVampire({} as {
age: number,
garlic_allergy: boolean,
sun_allergy: boolean,
fast: boolean | null,
fear: boolean | undefined
})
);
I don't see the reason why this test should fail, but as you can see, it does, because 'fast' and 'fear' are expected to be of the 'null' and 'undefined' types respectively.
These errors are popping up all over the place and make propEq painful to work with.
@Nemo108 I checked and rechecked what we need to do and why. Both your MRs, at a high level, do 3 things:
Better typing for val when we don't yet know the type for obj
Full currying
Handling when obj is an array
That is a lot to check, verify, and agree on in a single MR. I would like to suggest we split these into 3 MRs, and focus on one part at a time.
I made the first MR that just focuses on the first case: https://github.com/ramda/types/pull/74
Please let me know what you think of my analysis of the problem and my proposed solution.
Given the happenings in both https://github.com/ramda/types/pull/99 and https://github.com/ramda/types/pull/73, I went back to this MR to look at the changes we left off on. Given the now known limitations around the other improvements (particularly with arguments, possibly returning never there, and cross-function type support), what is in this MR currently is, I think, the best option to move forward with in the short term.
The code changes do not solve all the issues called out, but it's better than nothing. I'm going to give it a thorough re-test and see if we can try and get this merged after https://github.com/ramda/types/pull/99
|
GITHUB_ARCHIVE
|
This is a Java peer-to-peer DHT based on the Chord algorithm. The system aims to be simple and to generalize to other algorithms.
The system has two services, “put” and “get”, which can be used without HTTP.
“put” is the service for putting a resource into the system. “get” is the service for getting resources from the system.
The system is based on the Chord DHT, which jDHTUQ implements.
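To make the put/get idea concrete, here is a toy Python sketch of how a Chord-style ring maps resources to nodes (illustrative only: jDHTUQ itself is Java, and a real Chord implementation adds finger tables, node joins and recursive lookups that this sketch omits):

import hashlib

def chord_id(key, m=16):
    # hash the key onto the Chord identifier ring [0, 2^m)
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** m)

class ToyChordRing:
    """Toy put/get on a Chord-style ring; not jDHTUQ's actual API."""
    def __init__(self, node_ids, m=16):
        self.m = m
        self.nodes = {n: {} for n in sorted(node_ids)}  # node id -> local store
    def successor(self, ident):
        # a resource lives on the first node at or after its identifier
        for n in self.nodes:
            if n >= ident:
                return n
        return next(iter(self.nodes))  # wrap around the ring
    def put(self, key, value):
        self.nodes[self.successor(chord_id(key, self.m))][key] = value
    def get(self, key):
        return self.nodes[self.successor(chord_id(key, self.m))].get(key)

ring = ToyChordRing([1024, 20000, 45000])
ring.put("song.mp3", b"...")
print(ring.get("song.mp3"))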
Current stable version of jDHTUQ v2.0.1
Current development version of jDHTUQ v2.0.2
Current development version of jDHTUQ v3.0.0
* Reference to RFC 6627 added.
* Reference to RFC 6298 added.
* Wording of “Change of origin” added.
* Dependency to JUnit added.
* Added use of ‘org.mockito’.
* Changed source code jar to be a dependency.
* Clean up the project structure.
* Default port is now 8789.
* First public release of jDHTUQ v2.0.1
* First public release of jDHTUQ v2.0.0
* First public release of jDHTUQ v1.0.0
* Added source code to jDHTUQ.
* First public release of jDHTUQ v1.0.0-rc2
* Fixed source code.
* First public release of jD eea19f52d2
Process Hacker is an advanced task manager with a highly customizable user interface. It allows you to easily handle processes, analyze their resource usage, and terminate/suspend them. The application can be used to collect data from multiple sources, monitor services, and track important system events. With a modern look and powerful features, Process Hacker is a tool you’ll be able to rely on.
Process Hacker is a powerful and useful tool that can be used by individuals who need to make decisions about their personal computing. The application can analyze and log the usage of system resources, as well as the execution of different software. All the gathered information is presented in an intuitive and easy to understand manner. The application can be used for educational purposes, and it also features various reports that help you gain useful insights into system functioning.
Process Hacker History
Process Hacker 3.6.0 Crack comes packed with a lot of great features to enhance your Windows PC experience. You can customize Process Hacker and add all sorts of gadgets, widgets, and controls to take advantage of your computer in the most meaningful way.
Process Hacker can log system events in order to quickly access information about computer performance and how your system is configured. You can also analyse the usage of the system resources. You can also tweak your operating system in a very useful way in order to optimize it. You can change the process priority, terminate unwanted processes, or modify the behaviour of important processes.
Process Hacker can also create and edit processes. You can also edit and create shortcuts and create launchers. You can also easily add gadgets and widgets to your desktop. Process Hacker Pro Crack has a streamlined interface that makes it easy to navigate, explore, and modify all kinds of information. You can also find many useful reports that provide you with useful data about your PC.
Process Hacker 3.6.0 Serial Number supports multiple workspaces. In this way, you can have different sets of tools that help you with different purposes. Process Hacker 3.6.0 Serial Key can help you monitor your performance, analyze your CPU usage, and manage services. This feature allows you to make use of different system resources. You can also log and analyse key system events.
You can also take advantage of a lot of useful and helpful features. You can also easily organise your tools in order to work in an intuitive and helpful manner. Process Hacker Keygen is a perfect tool for beginners who want to know about computer system information. You can also change the Windows registry settings to get more information about
|
OPCFW_CODE
|
What is the most effective grammar to use for issue titles?
When choosing a title for a new issue, there are several grammatical options to choose from. Here are four different options for an example issue I just made up:
Descriptive: "Newlines are not stripped when multi-line headers are parsed"
Imperative - negative: "Avoid leaving newlines when parsing multi-line headers"
Imperative - context first: "Fix parsing of multi-line headers: strip newlines"
Imperative - desired action first: "Strip newlines when parsing multi-line headers"
At my workplace the issue titles alternate between all of these formats, but I keep thinking it's better to pick a convention and stick to it.
Which option (from the above, or a different one) is the preferred one, i.e. the one that will minimize the effort of developers reading these issues?
Edit: as noted in the comments, different issue types may warrant a different choice of grammar, where an especially important distinction is between bugs and new work. To make the question more specific, let's say we choose, as msell offers in his answer, the descriptive option. Now let's look at a suggestion to improve an existing feature:
"Add city information to geolocation"
If we stick to the descriptive grammar, we should replace that title with
"City information missing from geolocation"
However, this seems less clear, since the lack of city information is not a bug, it is the expected behavior. The reporter is merely suggesting to improve that behavior. So it seems the imperative option is the way to go with that issue.
Let's look now at an entirely new feature. Consider these options:
"No ability to export report to XML"
"Add ability to export report to XML"
"Export report to XML"
Describing the current behavior seems even more ridiculous here, but the imperative option doesn't look so good either. The third option simply describes the new feature itself, which might be the way to go here, assuming the issue has meta-data that indicates this is a “new feature” issue.
So what's more important - sticking to one convention, or matching the grammar to the issue?
Where would you draw the line between a descriptive title and an imperative title?
When would you describe the current behavior and when the desired behavior?
One problem with the imperative variants is that they assume that you know the solution when creating the issue.
Whichever form is most descriptive is the one to use, and that will vary from issue to issue.
Do you only track bugs, or features/new work as well? We use your "imperative" style more for new work, but I tend to lean towards "descriptive" for bugs because a) as pointed out above, you may not know a solution, and b) if someone sees this bug in the future (potentially in the field with an older release), there's a much higher chance he'll connect the dots if you give him a description of what he might be seeing.
In the title I prefer describing the bug, not the fix. Also as opposed to pdr's answer, I focus on the actual bug, not what I was trying to achieve. There is plenty of space available in the detailed description for the expected behaviour, background information and even proposals on how to fix the bug.
To help me writing good titles for bugs, I try to come up with a description that sounds good in release notes. Consider the given examples:
Fixed bug #1001: Newlines are not stripped when multi-line headers are parsed
This sounds good.
Fixed bug #1001: Avoid leaving newlines when parsing multi-line headers
It is a little unclear if the bug is about leaving newlines or avoiding them.
Fixed bug #1001: Fix parsing of multi-line headers: strip newlines
While the bug in this format is clear, it sounds clumsy with the repetition of the word "fix" and usage of an additional colon.
Fixed bug #1001: Strip newlines when parsing multi-line headers
This has the same problem as the second example. It's unclear if the described line is the correct or incorrect behaviour.
Thus I prefer the first example "Descriptive".
A benefit of this approach is that less effort is needed when creating the release notes, as the information is already (mostly) in correct format. However even if release notes are not needed, this guideline still works well and produces bug titles that are easy to read and understand.
+1 Using the release notes as a criterion is an excellent idea. I think this could also help with the other types of issues I mentioned when editing the question.
Always why, rather than what. From the perspective of the user, what is the actual problem you're trying to solve? For example,
Search doesn't work if looking for a string that spans two lines.
It's impossible to tell from any of your examples. The problem being that you can't prioritise tasks, unless you have someone handy who knows why those changes need to happen.
If, when creating an issue, you have an idea what the solution is, write that into the comment instead. It's no effort for a developer to read the whole issue but, when you're prioritising, you need to look at that issue in the context of other issues. So it's important to have useful information to that end.
But often there are bugs which do not have a single consequence in terms of user experience. Maybe the fact that newlines are not stripped has several effects in other parts of the code, which causes multiple consequences for the user, or perhaps zero consequences for now, but there might be ones in the future.
@Joe: Here's the thing -- how do you know that not stripping the new lines is the single cause of any of those problems? By the time you've dug deep enough to be sure, you may as well type .Replace(...) and be done, without raising a ticket. I would raise one for each visible issue, and if one fix should happen to fix another, close them both down.
I agree that once you find unexpected behavior you should open an issue describing it before you start digging in the code to find the cause, as the digging is actually part of working on the issue. What I'm saying is that often the behavior you find is not in the user experience, but rather in some internal function. It would then require digging to find the consequence in terms of the UX, if there is any. Therefore the issue title should describe the first unexpected behavior you encounter, be it in the UX or internal.
|
STACK_EXCHANGE
|
The world of computing has been bound by the binary code for decades. Conventional computers, operating solely on the language of 1s and 0s, have relied on an intermediary step known as programming languages to translate human intent into machine actions. This approach, while powerful, has always had its limitations. But what if we could break free from the constraints of programming languages and instruct machines directly? The answer lies in the revolutionary technology of WantWare, where software evolves into a new paradigm of human-computer interaction.
The Binary Conundrum
In the realm of conventional computing, the binary code reigns supreme. Machines, devoid of human-like understanding, process information in the form of binary digits – 1s and 0s. While effective for computational tasks, this binary language is far removed from how humans naturally communicate. In the world envisioned by science fiction, we see characters effortlessly interacting with machines, as if conversing with a fellow human. The reality, however, is quite different. Few individuals write or interpret binaries in their daily lives, and for a good reason.
Programming Languages: The Bridge Between Worlds
Programming languages serve as a bridge between the binary world of machines and the expressive realm of human language. They allow software developers to convey their intentions to computers through a syntax that computers can understand. This intermediary step, however, comes with its own set of challenges. Learning a programming language can be as daunting as learning a foreign tongue. It requires specialized knowledge, training, and experience.
Moreover, programming languages can introduce errors and vulnerabilities, often leading to software bugs and security breaches. The complexity of these languages makes it difficult for non-experts to engage with computing systems directly, limiting the democratization of technology.
The WantWare Revolution
Enter WantWare, a transformative technology that heralds the evolution of software. Unlike traditional programming languages, WantWare operates on the principle that machines can be instructed without the need for an intermediary code. This groundbreaking approach redefines human-computer interaction by introducing the concept of Meaning Coordinates.
Meaning Coordinates are a series of analog values that define the limits of possible meaning for specific terms and contexts. They serve as a universal language bridge between humans and machines, encompassing natural language, signs, symbols, semantics, and syntax. In essence, they enable machines to understand human intentions without the intricacies of programming languages.
With WantWare, software evolves into a dynamic, human-centric tool. Instead of writing lines of code, users can interact with machines in a language that feels natural to them. WantWare embodies the idea of “Software Evolved,” where technology aligns seamlessly with human intent, transcending the boundaries of traditional software.
Impact on Humanity
The implications of WantWare’s evolution are profound. It democratizes technology, making it accessible to a broader audience. Anyone can interact with machines, communicate their intentions, and harness the power of computing without the need for extensive programming knowledge.
Moreover, the removal of programming languages from the equation reduces the likelihood of errors and security vulnerabilities. WantWare paves the way for safer, more reliable software systems, critical in an increasingly interconnected world.
WantWare represents a leap forward in the evolution of software and computing. It breaks free from the constraints of programming languages, enabling humans to interact with machines in their own language. This technology holds the promise of a more inclusive, secure, and efficient digital future. As software evolves into WantWare, the boundaries between humans and machines blur, ushering in an era where technology aligns seamlessly with human intent, ultimately benefiting humanity as a whole.
|
OPCFW_CODE
|
The theory of tight submanifolds starts with attempts to generalize theorems about convex surfaces to topologically more complex surfaces such as the torus. For surfaces, it is possible to develop this generalization in terms of an elementary notion, the two-piece property, which then leads to the study of critical points of height functions and the theory of total absolute curvature. These notions can then be applied for higher-dimensional objects in higher-dimensional Euclidean spaces, producing a rich collection of examples and theorems in the global geometry of submanifolds.
An object in ordinary 3-dimensional space is said to have the two-piece property (TPP), if any plane cuts it into at most two pieces. Examples of surfaces with the TPP are spheres and ellipsoids and, more generally, the boundary of any bounded convex body. There are also non-convex objects with boundaries that have the TPP, for example a torus of revolution, or, more generally, a surface of revolution obtained by revolving a convex curve around an axis in the plane of the curve not meeting the curve.
If we deform a sphere into a non-convex surface, for example a U-shaped object, or a sphere with a dent in it, the resulting surface will not have the TPP.
For closed subsets, the TPP is equivalent to the condition that the intersection of the object with every closed half space is connected.
For compact surfaces (without boundary), the TPP is closely related to the study of critical points of height functions. Any plane in space can be considered a level set of a height function in a direction perpendicular to the plane. If a plane cuts a surface into more than two pieces, then a height function perpendicular to this plane must have at least one maximum or minimum on each piece. It follows that if a surface does not have the TPP, there must be a height function with at least two (strict) local maxima on the surface. Conversely, if a height function has two strict local maxima on a surface, then the half space above the level set containing the lower of the two will intersect the surface in at least two pieces. It follows that a surface has the TPP if and only if no height function restricted to the surface has more than one strict local maximum. A surface with this property is called tight. An equivalent definition for a surface to be tight is the condition that every local support plane be a global support plane.
The TPP is a topological condition, so it applies to any surface in space, whether it be smooth, polyhedral, or just a one-to-one continuous image of such a surface. If the surface happens to be sufficiently smooth, then it is possible to characterize the tightness condition in terms of its total or Gaussian curvature. Any point of positive curvature is a local extremum of the height function perpendicular to the tangent plane at the point, so the tightness condition implies that in any direction there is at most one point on the surface with positive curvature that is critical for that direction. For almost any direction, the maximum of the height function in that direction on the surface will occur at a point of positive curvature. It follows that if a smooth surface is tight, then the only strict local maxima of any height function must occur on the `outside', where the surface intersects its convex hull, the smallest convex set containing the surface.
For a smooth surface embedded in 3-dimensional space, tightness can be expressed in terms of the Gauss spherical image mapping, which sends each point of the surface to the point of the unit sphere centered at the origin with the same outer unit normal vector. (This definition assumes that a consistent field of unit normals has been chosen over the whole surface.) For any smooth surface without boundary, almost every point of the sphere is the image of at least one point with positive curvature, so the total area of the spherical image of the positive curvature part is at least 4 pi. For a tight smooth surface, almost every point of the sphere is the image of exactly one point of the surface with positive curvature, so the total area of the spherical image of the positive curvature part of the surface achieves the minimum value, namely 4 pi. Originally this property was used as the definition of tightness for smooth surfaces in ordinary space.
Although the first definition of tightness was given in terms of curvature, the critical point or TPP reformulation is much broader in scope. It applies not only to smooth surfaces in space but also to polyhedral surfaces. The critical point condition and the TPP extend naturally to surfaces embedded in higher-dimensional Euclidean spaces, and to immersions and to mappings with singularities.
It was Nicolaas Kuiper who made the first wide-ranging and systematic study of tight embeddings and immersions of surfaces, in three dimensions and higher. He produced tight embeddings of all orientable surfaces and tight immersions of all but three non-orientable surfaces in 3-dimensional space. He proved that two of the remaining surfaces, the real projective plane and the Klein bottle, could not be immersed tightly in 3-space even as topological surfaces and he conjectured that the final case, a real projective plane with one handle, could not be immersed tightly into 3-space. Only recently was Francois Haab able to prove that conjecture for smooth immersions, and even more recently Davide Cervone produced a surprising example to show that this surface can be immersed tightly as a polyhedral surface.
There are still a number of unsolved problems concerning immersions of surfaces into 4-space, but thanks to the work of Kuiper, the situation for 5-space is better understood. First of all, Kuiper showed that any smooth tight immersion of a surface into Euclidean n-space for n >= 5 must lie in a 5-dimensional affine space, and moreover, if the image does not lie in a 4-dimensional subspace, then the surface is the real projective plane and the immersion is affinely equivalent to the Veronese embedding, an algebraic surface. An even stronger result by Kuiper and William Pohl states that any topological tight embedding of the real projective plane into 5-space, not lying in a 4-space, must be either the smooth Veronese embedding or a simplexwise linear embedding of a triangulation with exactly six vertices.
In order to appreciate the nature of the theorems of Kuiper, it is useful to consider the TPP and tightness for closed curves in Euclidean spaces. A convex curve in the plane has the TPP, whether it is smooth or polygonal or a more general topological embedding of the circle. If an embedded curve in the plane is not convex, then it does not coincide with the boundary of its convex hull, and there is a segment in the convex hull boundary containing points not in the curve. A line containing this segment bounds a half space meeting the curve in at least two pieces, so a non-convex plane curve does not have the TPP. Furthermore, if a curve is not planar, then there are four points on the curve, in cyclic order, not lying in a plane, and it is possible to find a plane with the first and third on one side and the second and fourth on the other, thus separating the curve into at least four pieces. Thus a TPP curve in n-space is already contained in an affine 2-dimensional space.
In the case where the curve is smooth, we may recast the above result in terms of curvature to obtain a famous theorem in global differential geometry due to Werner Fenchel: the total curvature of a smooth closed curve in any dimension is at least 2 pi, and if it is exactly 2 pi, the curve is convex.
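In symbols, for a smooth closed curve gamma in R^n with curvature kappa and arc length element ds, Fenchel's theorem reads:

\[ \int_\gamma \kappa \, ds \;\geq\; 2\pi , \]

with equality if and only if gamma is a plane convex curve.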
For 2-dimensional surfaces, the TPP restricts not only the number of maxima and minima of height functions but also the total number of critical points. This follows from the critical point theorem of elementary Morse theory: for almost every height function on a smooth surface, the only critical points are maxima, minima, and ordinary saddle points, and the number of maxima plus the number of minima minus the number of saddles is constant, equal to the Euler characteristic of the surface, also described as the number of vertices minus the number of edges plus the number of triangles in any triangulation of the surface. By `integrating' this theorem over all height functions, we obtain one of the most famous of all theorems in global differential geometry, the Gauss-Bonnet Theorem, relating the integral of the total curvature of a smooth surface to its Euler characteristic. We may use this fact to obtain other characterizations of tightness.
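Written out, the Morse critical point relation and its `integrated' form, the Gauss-Bonnet theorem, are:

\[ \#\mathrm{maxima} \;-\; \#\mathrm{saddles} \;+\; \#\mathrm{minima} \;=\; \chi(M), \qquad \int_M K \, dA \;=\; 2\pi \, \chi(M). \]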
For higher-dimensional manifolds, the situation is quite different. The two-piece property no longer serves to place such a strong restriction on the nature of the critical points of height functions. We say that an n-manifold is tight if the intersection of the object with any half-space is no more complicated than it has to be, i.e. the homology of the intersection is not greater than the homology of the whole object. For example, if a manifold is simply connected, we require that the intersection with every half space also be simply connected. Thus a closed hemisphere has the TPP but it is not tight since it is simply connected but there is a half space that intersects it in a circle.
Morse theory gives lower bounds for the numbers of critical points of various types for almost all smooth functions defined on a higher-dimensional manifold. Tightness for higher-dimensional submanifolds of a Euclidean space requires that almost all height functions have the minimal number of critical points. Fenchel's theorem was generalized by Chern and Lashof by considering the Lipschitz-Killing curvature of a submanifold of a higher-dimensional Euclidean space. The total measure of the absolute value of this curvature is equal to the integral over the sphere of the number of critical points of height functions on the submanifold. Fenchel's theorem is generalized by the result that an m-dimensional sphere is immersed with minimum total absolute curvature if and only if it is a convex hypersurface in an affine subspace of dimension m+1.
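In one common normalization (assumed here), the total absolute curvature of an immersion f of a compact m-manifold M into R^n is the average number mu(p) of critical points of the height functions h_p over all directions p, and the Chern-Lashof theorem reads:

\[ \tau(f) \;=\; \frac{1}{\operatorname{vol}(S^{n-1})} \int_{S^{n-1}} \mu(p) \, dp \;\geq\; 2, \]

with equality if and only if f embeds M as a convex hypersurface in an (m+1)-dimensional affine subspace. Tightness is then the condition that almost every height function has the minimal number of critical points permitted by the topology of M.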
In this article, we will develop the theory of tight submanifolds primarily in the smooth and polyhedral situations. A related article based on the work of Nicolaas Kuiper will develop this theory for topological immersions.
|
OPCFW_CODE
|
- What is the difference between . and - in alignments?
In cmalign alignments, '-' means a nucleotide is missing compared to the covariance model: it represents a deletion. The dot '.' indicates that another chain has an insertion compared to the covariance model; the current chain is not lacking anything, it is another one that has more.
In the final filtered alignment that we provide for download, the same rule applies, but on top of that, some '.' are replaced by '-' when a gap in the 3D structure (a missing, unresolved nucleotide) is mapped to an insertion gap.
- Why are there some gap-only columns in the alignment?
These columns are not completely gap-only: they contain at least one dash-gap '-'. This means an actual, physical nucleotide which should exist in the 3D structure should be located there; the previous and following nucleotides are not contiguous in space in 3D.
- Why is the numbering of residues in my 3D chain weird?
Probably because the numbering in the original chain already was a mess, and the RNANet re-numbering process failed to understand it correctly. If you ran RNANet yourself, check the logs/ folder and find your chain's log. It will explain how the chain was re-numbered.
- What is your standardized way to re-number residues?
We first remove the nucleotides whose number is outside the family mapping (if any). Then, we renumber the following way (a small sketch of step 1 follows the list):
0) For truncated chains, we shift the numbering of every nucleotide so that the first nucleotide is 1.
1) We identify duplicate residue numbers and increase by 1 the numbering of all nucleotides starting at the duplicate, recursively, until we find a gap in the numbering sequence. If no gap is found, residue numbers are shifted until the end of the chain.
2) We proceed in a similar way for nucleotides with letter numbering (e.g. 17, 17A and 17B will be renumbered to 17, 18 and 19, and the following nucleotides in the chain are also shifted).
3) Nucleotides with partial numbering and a letter are hopefully detected and processed with their correct numbering (e.g. in ... 1629, 1630, 163B, 1631, ... the residue 163B has nothing to do with number 163 or 164; the series will be renumbered 1629, 1630, 1631, 1632 and the following will be shifted).
4) Nucleotides numbered -1 at the beginning of a chain are shifted (together with the following ones) so that the chain starts at 1.
5) Ligands at the end of the chain are removed. Any residue which is not A/C/G/U and has no defined puckering or no defined torsion angles is detected as a ligand. Residues are also considered to be ligands if they are at the end of the chain with a residue number more than 50 above the previous residue (ligands are sometimes numbered 1000 or 9999). Finally, residues "GNG", "E2C", "OHX", "IRI", "MPD" and "8UZ" at the end of a chain are removed.
6) Ligands at the beginning of a chain are removed. DSSR annotates them with index_chain 1, 2, 3..., so we can detect that there is a redundancy with the real nucleotides 1, 2, 3. We keep only the first, which hopefully is the real nucleotide. We also remove the ones that have a negative number (since we renumbered the truncated chain to 1, some became negative).
7) Nucleotides with creative, disruptive numbering are detected, as far as possible, and renumbered, even if the numbers fall out of the family mapping interval. For example, the series ... 1003, 2003, 3003, 1004 ... will be renumbered ... 1003, 1004, 1005, 1006 ... and the following accordingly.
8) Nucleotides missing from portions not resolved in 3D are created as gaps, with correct numbering, to fill the portion between the previous and the following resolved ones.
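For illustration, here is a minimal Python sketch of the duplicate-handling rule in step 1 (a hypothetical helper, not the actual RNANet code):

def renumber_duplicates(numbers):
    # numbers: residue numbers in chain order, e.g. [16, 17, 17, 18, 19, 25]
    nums = list(numbers)
    for i in range(1, len(nums)):
        j = i
        # shift by 1, recursively, until a gap in the numbering absorbs the shift
        # (if no gap exists, the shift propagates to the end of the chain)
        while j < len(nums) and nums[j] <= nums[j - 1]:
            nums[j] = nums[j - 1] + 1
            j += 1
    return nums

print(renumber_duplicates([16, 17, 17, 18, 19, 25]))  # [16, 17, 18, 19, 20, 25]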
- What are the versions of the dependencies you use?
cmalign is v1.1.4, sina is v1.6.0, x3dna-dssr is v1.9.9, and Biopython is v1.78.
|
OPCFW_CODE
|
Everybody is trying to build a self driving car today. Google has been testing their solution for the past ten years or so, Tesla just announced they'd be putting the "self driving hardware" onto their newly manufactured cars, Uber has a big effort with Volvo in Pittsburgh, comma.ai is trying to ship a box for outfitting certain cars with a self driving mode, etc. Obviously the car manufacturers are following, with Ford making announcements recently, BMW working silently, and so on and so on. Some of these efforts are explicitly cautious about what they promise (driver-assist technology rather than full autonomy, e.g. Toyota), but many voices, particularly VCs from the Bay Area, are hyperactively announcing how life will be great and how the self driving car (in the sense of full autonomy) is a done deal.
Well, I would not be a sceptic if I did not put all those hyper-optimistic statements in doubt. Let me go through a few claims about self driving cars one by one and put my sceptical comment next to each statement. To be frank: I'm not against the technology, I'm against the hype.
- Self driving cars will be safer than human drivers. Given all of the fancy sensor technology on those vehicles, they will be safer in the sense of not bumping into static obstacles. But even though the absolutely basic level of obstacle perception can be solved with technology, the more advanced reasoning about the situation and broader context remains elusive. The car knows there are other cars on the road, and can even detect turn signals, but it cannot read the intentions of others on the road, nor can it deal with unusual situations. An autonomous car cannot be communicated with verbally (as a human driver can), and therefore cannot easily be instructed to follow a particular pattern in a given unusual situation, and so on. In this broader sense, self driving cars can actually be more dangerous than an alert human driver.
- Self driving cars will hugely improve safety on the road. Maybe. But this stuff is tricky to measure. Statements like "a self driving car is safer than an average human in metric X" can be misleading. First of all, metric X can be irrelevant. But even if it is relevant, the distribution of readouts of X for human drivers may not have a very well defined mean (e.g. if that distribution is long-tailed). Consequently the mean can be substantially affected by a few outliers (e.g. a group of very unsafe drivers such as drunk teenagers), and a great majority of drivers may be way safer than the "average driver". Sending out a fleet of autonomous cars that are merely safer than the "average human" may then in fact drastically decrease safety on the road (the short simulation after this list makes the effect concrete).
- Cars sit on the lot for a greater part of the day, autonomy will allow for better usage. To some degree, yes. But the unevenness of car usage during the day stems not only from the fact that they need to be driven, but purely from the fact that the demand for transportation varies enormously during the day. That is, if we build enough autonomous cars to cover the peak traffic hour, then the majority of these cars will be sitting on the lot for the rest of the day, much like traditional cars do, simply because there will be no demand for transportation off peak.
- People want self driving cars. As romantic as the dream is, the majority of the random people I've talked to actually don't want a self driving car. They like to drive themselves and are cautious about technology. They would welcome driver-assist features, but having a car without a steering wheel is a whole different story. Since the product is not really available yet, it is not at all clear to me whether the demand is there. And even if that demand is currently there, it can easily be demolished by a few tragic accidents with a horror story of somebody inside an autonomous car being killed without any ability to act (particularly if the situation that led to the crash would have been easy for a human to avoid).
- Self driving cars may not be smart enough to handle other humans on the road, but soon there will only be self driving cars on the road and therefore it will be much safer. This is really not clear, since we are entering the area of nonlinear effects. Even if a single self driving car is "safer" by some measure than a human-driven car, a freeway full of self driving cars may not be any safer than a freeway full of regular cars. There are numerous scenarios, notably: somebody hacks the self driving system, leading to a major disruption (best case) or a massive series of horrible accidents (worst case). Or several autonomous cars interacting together lead to horrible mistakes: e.g. cars rely heavily on other cars for lane detection, some cars begin to drift, dragging more cars behind them into a dangerous situation, and so on.
- Self driving cars will be expensive but it does not matter because they will be shared. Well, ride sharing is certainly a thing; car sharing has not really taken off. People are actually quite attached to their own car: they have their own custom mess in there, families have car seats, young males have their fancy audio systems and tuned engines, and so on. The self driving/sharing culture goes against all of that. It may find its niche, but it does not seem like it can easily replace our current lifestyle.
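To make the averaging argument concrete, here is a small toy simulation in Python (invented numbers, purely illustrative):

import random

random.seed(0)
# Hypothetical crash rates per million miles: most drivers are very safe,
# a small group (e.g. impaired drivers) is extremely unsafe.
safe = [random.uniform(0.1, 0.5) for _ in range(95)]
unsafe = [random.uniform(20.0, 40.0) for _ in range(5)]
rates = safe + unsafe

mean_rate = sum(rates) / len(rates)
median_rate = sorted(rates)[len(rates) // 2]
print("mean:", round(mean_rate, 2))      # pulled up by the few outliers
print("median:", round(median_rate, 2))  # what a typical driver looks like

# A fleet that merely beats the mean can still be far worse than
# the typical (median) driver it replaces.
robot_rate = 1.0
print("beats the mean:", robot_rate < mean_rate)      # True
print("beats the median:", robot_rate < median_rate)  # False

A fleet like this would pass the "safer than the average human" test while being roughly three times more dangerous than the median driver it replaces.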
I think before we start blowing up the hype about self driving car technology, we should seriously start discussing issues such as the above. A lot of the problems with today's traffic congestion/pollution could be fixed by building better public transportation, which in the end might be cheaper, safer and easier to put together, particularly utilising some of the recent technology. The self driving car seems a little bit like a romantic dream from the Jetsons show: even though seemingly great, deeper down it may be a mirage. Not really a solution to a real problem, but a costly, semi-functional, hype-driven caprice.
On Oct 28 2016, comma.ai decided to cancel their product comma one, which was supposed to outfit some cars with a "self driving mode". The cancellation came after a very reasonable letter from NHTSA which expressed concerns over product safety.
In addition here are a few videos of Tesla running autopilot crashing into other cars:
The footage of the crash that actually caused a fatality, a Tesla slamming into a white truck, is not available. There is, however, a picture of the wreck visible here:
I think enthusiasts of self driving cars should watch those videos carefully.
|
OPCFW_CODE
|
What V11.5.7 can (and can’t) do with Expression-based index
Expression-based indexes have been around since V10.5, allowing your CREATE INDEX statement to build an index using an expression, e.g. an index on the upper-cased value of a column, along the lines of

CREATE INDEX ucountry_ix ON Logbook.Airfield (UPPER(country))
The expression you use can be based on the data in the table (as in this example), but it doesn’t have to be. The result of the expression is stored in the index itself. This can give some significant performance improvements:
- an SQL statement could contain a predicate that includes an expression and, if it matches the expression-based key in the index, the optimizer can use the expression-based index to access the data
- the result of the expression might be in the required result set, or
- the expression might be needed in the ORDER BY
The advantage of that example can be demonstrated with a query such as
SELECT * FROM Logbook.Airfield WHERE UPPER(country) = 'SWITZERLAND'
If you look at the access path before the index was created, it shows
Not a huge estimated cost but this is a very small table and I try and avoid table scans as a rule.
Put the expression-based index in place and the access path changes to
The expression-based index is used and the Estimated Cost is only 6% of the original.
The latest version of DB2 LUW advertised a couple of improvements around Expression-based indexes that caught my eye, specifically removing the limitations on using RENAME and ADMIN_MOVE_TABLE on tables that have them defined (see links below for some details).
They’re related, but they advertise the removal of a couple of limitations that Expression-based indexes have suffered from since arriving with 10.5.
If you try and do an on-line REORG of a table that has an expression-based index on it, you get an error:
To be fair, there’s nothing in the documentation that suggests that this issue was going to be addressed; I was just hoping it was in there with the other enhancements to Expression-based indexes. But it’s not.
The issue here is that, until this latest release, an attempt to rename a table with an Expression-based index fails. Here’s an example on a V11.1.3 instance
As you can see, the first attempt fails but the second, after I remove the Expression-based index on the table, is successful.
The output from the same test, run on a V11.5.7 instance (although I’m also including the output from a query to show the indexes on this table), is successful:
You can see that the Expression-based index has stayed with the table as it’s renamed and hasn’t impeded the operation. All good.
Here’s where it seems to get a bit funny. Let me just say at this juncture that I am happy to be corrected; it might be that I have got the wrong end of the stick from the advertised enhancements. But it looks to me as if the ADMIN_MOVE_TABLE procedure should now cope with an Expression-based index. If you follow the link I included in the Overview, you’ll find this
“The ADMIN_MOVE_TABLE procedure is now able to move tables having an index with an expression-based key. This feature is available when the expression does not contain qualified names.”
That qualifier in the 2nd sentence doesn’t seem to me to apply to my simple Expression-based index. But let’s compare and contrast V11.5.7 with my old V11.1.3. As I’m sure you’re aware, there are 2 ways of running ADMIN_MOVE_TABLE; you can supply the procedure with some parameters and it will build a target table for you, or you can pre-define the target table and get it to do the copying, replaying staging data and a final swap for you. Both methods should allow almost 100% access to the table as it is being changed.
V11.1.3 Method #1
This is an example of that first method; the proc is supplied with a parameter that indicates the change I want to occur to my source table. In this case I want to change the basis on which it is organized.
You don’t get a lot of detail with that failure, but it’s pretty black & white about the fact that it’s not going to work, unless you remove the table features it doesn’t like; in this case the Expression-based index.
If I drop that Expression-based index and try again, it works fine.
V11.5.7 Method #1
Now I’m going to do exactly the same operation in my latest instance, with the hope and expectation that it will cope with the fact that my source table has an Expression-based index.
Personally, I don’t think that’s a very helpful message. I spent quite a bit of time digging into this and trying to work out what the issue was. I suspect it might be connected to this APAR: ADMIN_MOVE_TABLE FAILS WITH SQL0104N WHEN MOVING TABLE WITH DOUBLE QUOTES IN NAME WHICH HAS A STATISTICS PROFILE
I think what is happening is that it is trying to rename the Statistical View that is automatically created when you build an Expression-based index to a new, system-generated name. That name will include lower-case characters and will therefore need to be in quotes. The statistics profile might also be automatically created in order to service the Statistical View, but, according to the Apar
“Problem was first fixed in Db2 Version 11.5.7.0 (11.5 Mod 7 Fix Pack 0)”.
But the point is, it doesn’t work, and I thought the blurb said it would.
V11.1.3 Method #2
Next, it’s the pre-defined source table version. What you see in the screenshot below is
- the source table, AIRFIELD
- with an EXPRESSION-BASED INDEX defined on it
- and the automatically generated Statistical View that this brings with it
- the same error as with Method #1
V11.5.7 Method #2
Exactly the same operations are run in my new V11.5.7 instance and you can see, this time, it works
The only thing to point out is that a check of the objects, to make sure that the Procedure has successfully moved the table and the EXPRESSION-BASED INDEX, shows this:
Look at the name of that Statistical View now: that’s what makes me think the Method #1 ADMIN_MOVE_TABLE might be getting tripped up by its own processing.
I’m aware that Expression-based indexes haven’t gained a huge amount of traction in the DB2 customer base, but we do have customers using them with a great deal of success. So, I’m pleased to see some focus on making them more robust and making my job as a DBA easier in administering a database that includes them. But this latest enhancement looks like it might still have a few rough edges, unless, as I said earlier, I have misinterpreted the documentation.
If you think I have, please drop me a line. I’d be very happy to discuss.
|
OPCFW_CODE
|
During our yearly Hack Week event, Brightcove engineering teams from around the world join forces to build new potential product features—and creative solutions that may involve an entirely new line of products. From June 3 through 7, we held our 10th annual Hack Week, which was one of our biggest and most successful yet!
Check out this video for a behind-the-scenes look at the event:
Hack Week by the numbers
This year, 89 employees (84 engineers joined by five members of our user experience team and several product managers) worked on 39 projects across offices in:
- Sydney, Australia
- London, England
- Boston, MA
- Scottsdale, AZ
- Seattle, WA
- Guadalajara, Mexico
Six teams were completely remote and used video conferencing and online tools to coordinate their efforts across several time zones. And nine projects were directly inspired by suggestions from customers and partners at our PLAY 2019 conference.
And the winner is…
At the end of the event, we celebrated our team’s hard work and gave out several awards in each office.
The following four awards were given out in our Boston office:
- People’s Choice was awarded to the lone entry from our UX team for their prototype of a video analytics application for mobile devices—hinting at what the future of analytics might look like on the go. This project was one of several challenges taken on after direct feedback from customers and partners at PLAY 2019.
- Best Technical Accomplishment was awarded to a project that created a prototype for a new live service utilizing existing open-source technologies that could enable the next generation of live features.
- Craziest Idea was awarded to a team that explored using Amazon’s Rekognition API to extract metadata from videos to better categorize and identify the subject matter within the video—opening up the possibility of more advanced tagging and targeting of content.
- The Most Business Impact award was given to a team that experimented with a validation tool for analytics that may one day help proactively identify discrepancies and outliers in our analytics data.
Ideas from PLAY discussions
Several other projects were inspired by conversations our engineers had with our customers and partners at PLAY. One such project was a prototype for technology that automatically detects natural breaks in content and inserts ad cue points, thereby avoiding untimely interruptions in content. Another project explored the possibility of a player testing service that would enable customers to perform A/B testing of their latest players and view the results. I know I speak for the entire team when I say that we love having the opportunity to speak directly with customers about how we can continually innovate to help them succeed.
Overall, the team and I learned some really valuable lessons during this recent Hack Week, many of which we will carry forward into our work over the coming months. Keep an eye on this space to see how the ideas introduced during this event impact future projects!
|
OPCFW_CODE
|
So, I got some spare time during the summer and thought I should take a quick look at Windows Phone Development.
I actually have a few app ideas, and I was – a few years ago – able to create an Android app (from absolute scratch!): installing IntelliJ, getting it to work with the emulator, understanding the app lifecycle etc. – coding JAVA! Yes, a lot of new things, but in a few weeks I did have a pretty polished LOB app up and running.
Now how hard could it be to get the same thing working for Windows Phone? I mean, C#/Visual Studio: I live and breathe it. However, it turned out to be a neverending(??) nightmare.
As I have decided to use Azure VMs for development a lot more, I went the route of creating an instance of Windows Server 2012 with VS 2015 Community preinstalled.
Everything seemed to be installed, hooray – all the phone tools and SDKs. So let’s get started – I decided to just create a new app from the templates (Hub app or whatever it was called).
First error: need an app developer license. And to be able to request that, I was told I needed to install the Desktop Experience feature…
Finally found it. Install. Reboot box. Run again. Fine.
Now it all compiled fine, but when I tried to debug:
it can’t start the emulator, because it depends on Hyper-V. And we are running in a Hyper-V box, so it’s not possible – it seems, after hours of googling…
Ok, so I should run it locally on my own box instead. After struggling for a while reclaiming disc space on my old Windows 7 box I finally got VS2015 installed.
But now it doesn’t even look the same as it did on the Azure VM. Not the same project templates. The only Windows phone template I have now is “TestApplication”
and when trying to create a project from it I get “This template attempted to load component assembly ‘Microsoft.VisualStudio.SmartDevice.ProjectSystem.Base'”
I am pretty sure something about running Windows 7 is the problem…
Ok, how about just upgrading my old W7 box then… Yes, I’m waiting for the free W10 upgrade. But that won’t be here for yet another 12 days.
Maybe I could install the W10 preview? Been there – tried that. The preview program is not open anymore…since it’s so close to release.
Ok, so instead I decided to buy a new MSDN subscription (I actually haven’t had a sub for almost 2 years, I just found out). I could then quickly download and do a clean install of 8.1 or even the W10 preview, I thought.
And so I did. A few hours ago I ordered the Professional subscription. However, my order has so far only passed Pending and entered Confirmation status. I will not be able to download from MSDN until it is “activated”. And when checking emails from last time, I now realised it took 5 days!!!
Oh well. I think I will upgrade my IntelliJ IDE and focus on some Android development instead…
|
OPCFW_CODE
|
This document is used to construct a prototype, which is subsequently tested and iterated until it is suitable for distribution. The game is then published and made available to the general public. Most games were built by small teams of developers in the early days of game development. With the introduction of game engines and other tools, however, creation is becoming increasingly accessible to smaller teams and even individuals. As a result, a slew of new and imaginative games has emerged, as well as a burgeoning community of game makers.
Software engineers may work independently or as part of a team, while full-stack developers usually lead a team. They use their deep knowledge of software development, computer operating systems, and programming languages to help solve real-world problems. The process of designing, creating, testing, and publishing video games is known as game development. The game development process begins with an idea and progresses to a design document.
Role in a Team
Both roles have their merits and are important inside development teams. Full-stack developers handle both front-end and back-end, while software engineers specialize in various aspects of software development. Along with proper leadership, communication, and interpersonal skills, full-stack developers often possess skills in time management and attention to detail. They often prioritize tasks and work to meet the client’s deadline.
The work of a full-stack developer is all-encompassing, so, as you can expect, it involves a lot of different aspects. However, caring for the website or web-based application is at the core of all this. This involves all the tasks that go into creating and maintaining it. Software engineers are also responsible for designing the software architecture, which involves making decisions about the overall structure and organization of the system.
Full-Stack vs. Sofware Engineer: Other Top Differences
According to the Stack Overflow Developer Survey 2021, full-stack developers made up 49% of a population of more than 66,000 developers. Additionally, software developers made up more than 38% of the population. From games and business applications to network control systems and operating systems, software engineers can work on all types of projects. When managing clients, databases, servers, system development, etc. comes as a package, the first thing that pops up in the manager’s mind is approaching a Full Stack Developer.
Also, we have talked about the qualities needed for becoming a Software Engineer and a Full Stack Developer, their jobs, and the salaries they earn on average. As discussed earlier, a Full-Stack Developer is fully loaded with knowledge of all the latest platforms and frameworks required in developing a web application. According to Glassdoor, the estimated total pay for a software engineer in the United States is around $116,967 per year. Get started with TechRepublic Academy’s FullStack Web Developer Bundle. It’s important to understand what makes these professionals unique from one another and why both are an asset to your tech team or company.
Full-Stack Developer vs Software Engineer
On the other hand, software engineers usually specialize in just one domain. Full-stack developers have a range of core competencies that enable them to work across the entire stack. Full-stack developers work on database development and implementation, server configuration, client coding, and quality assurance testing. They may also create user interfaces (UI) that facilitate data input/output.
Full-stack developers are in charge of designing the user interface, developing the logic, creating the code, and testing a program. Software engineers concentrate on creating front-end or back-end design concepts. Software developers might earn more due to specialized expertise, but it can vary based on skills, experience, and job responsibilities.
Average salary of a full stack developer
Using their expertise, these developers help build prototypes for new projects more quickly, based on the client’s needs and specifications. Front-end and back-end development are the key components that maintain optimal system function in applications and websites. While a full-stack developer is responsible for both the client side and the server side, a front-end or back-end developer is in charge of a specific area and has absorbed intensive knowledge of that specialty.
- Software Engineers and Full Stack Developers have overlapping skills despite their differences.
- The average salary of a software engineer is around $106,746 per year.
- In simple terms, software development is a part of software engineering.
- Discover if this is the right career path for you with a free virtual work experience.
- Learning full-stack development can set you up for long-term career success in many ways.
By definition, full-stack engineers are both web developers and designers. Full stack developers and full stack engineers typically require distinct skill sets to excel in their jobs. Full stack developers may require additional project management skills compared to full stack engineers.
See what it’s like to work in software engineering for a major bank with JPMorgan’s free job simulation. Yes, a Full Stack Developer can take the required courses in software engineering to gain the required knowledge. Full stack development falls under software engineering; a Full Stack Developer is an offshoot of a Software Engineer. If you’re undecided, choosing between software engineering and full stack development can be very tricky.
For instance, a software engineer can create a native app for different platforms, including desktops, mobile devices, or even TV sets. The programming languages they often use are Java, C#, and Swift, as well as more general languages like C++. Full-stack software engineers can work across the full stack thanks to a variety of crucial skills.
What are these messages from, and is this a normal thing or a sign of something bad happening? The WMI-Activity trace shows entries such as "Wdm Call Returned Error: 4200", the service looks like it starts up every 2 minutes and then shuts down about 10 seconds later, and because the error code is included in the second call, the server doesn't respond correctly and event logs don't end up getting pulled into the server. The query I am using is Select * from Win32_ComputerSystem, which returns the name of the machine, and if I look in the Log Folder directory, there are several log files.

Common failure points: error 0x800706BA (HRESULT_FROM_WIN32(RPC_S_SERVER_UNAVAILABLE)) indicates a firewall issue or a server that is not available. Some event logs, such as the Security Event Log, may be protected by User Access Controls (UAC); include the Security privilege when connecting to the Win32_NTEventlogFile class, and you may also need the Backup privilege when connecting to WMI. For remote access, go to DCOM Config, find "Windows Management Instrumentation", and give the user Remote Launch and Remote Activation permissions. Repeated start/stop cycles are often caused by some other system (often SCOM) querying WMI. Failures can also originate in other parts of the operating system and emerge as errors through WMI; in many of these cases, the WMI provider is hanging or consuming an inordinate amount of resources.

Modern Windows versions no longer write most WMI diagnostics to plain log files. Instead, WMI uses Event Tracing for Windows (ETW), and events are available through Event Viewer or the Wevtutil command-line tool. Locate the Trace channel log for WMI under Applications and Service Logs | Microsoft | Windows | WMI Activity; the WMI event source is Microsoft-Windows-WMI. Windows Driver Model (WDM) providers continue to log to the Wbemprov.log file. In these events, GroupOperationID indicates the sequence in which the event occurs (one occurrence for each sequence), and User indicates the account that made the request to WMI, whether by running a script or through CIM Studio.

The legacy logging level is a setting with three possible values: 0 (no logging/disabled), 1 (log errors only), and 2 (verbose logging); changes to the logging level take effect immediately. To enable logging, open the Computer Management MMC snap-in, expand the Services and Applications section, select WMI Control, right-click it, and open its properties. In Location:, type the path to the log file folder, and in Maximum size (bytes):, set the maximum size of the log file; this integer value must be in the range 1024 to 2^32-1. When a file exceeds this size, it is renamed to ~filename and a new, empty log file is created. Only a user with administrative privileges can access the WMI Logs folder.

The WMITracing.log file is binary, and the formatting tool requires information stored in associated .tmf files. To view a WPP-based WMI trace, first create a single .tmf file: open an elevated Command Prompt window, navigate to the %SystemRoot%\System32\wbem\tmf directory, and combine the .tmf files there into one wmi.tmf file that includes the contents of all of the others. Then type tracefmt -tmf %systemroot%\system32\wbem\tmf\wmi.tmf -o OUTPUT.TXT %systemroot%\system32\wbem\logs\WMITracing.log. You can configure what information is included in the output by setting the TRACE_FORMAT_PREFIX environment variable.

To run a diagnostic script, copy the code and save it in a file with a .vbs extension, such as filename.vbs, then type cscript filename.vbs > outfile.txt at the command prompt to redirect the output of the script to outfile.txt (by default, cscript displays the output of a script in the command prompt window). For example, to retrieve information about the Security event log, query the Win32_NTEventlogFile class; to count the records in a log, check its NumberOfRecords property. The script examples shown in this topic obtain data only from the local computer. For more information about specific logs, see WMI Log Files; for more information about channels, see Event Logs and Channels in Windows Event Log; see also Troubleshooting WMI Client Applications.
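If you want to pull recent WMI-Activity events from a script instead of clicking through Event Viewer, here is a minimal sketch that shells out to the built-in wevtutil tool. The channel name Microsoft-Windows-WMI-Activity/Operational is an assumption (it is the usual name on recent Windows versions); verify it on your machine with wevtutil el:

import subprocess

# Query the 10 newest events from the WMI-Activity channel, formatted as text.
result = subprocess.run(
    ["wevtutil", "qe", "Microsoft-Windows-WMI-Activity/Operational",
     "/c:10", "/rd:true", "/f:text"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)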
Keras performance with OpenBlas Multi-Core
Problem
I am unable to force Keras + Theano to use more than 1 thread using OpenBLAS. Script used:
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
# this returns a tensor
inputs = Input(shape=(1024000,))
# a layer instance is callable on a tensor, and returns a tensor
x = Dense(256, activation='relu')(inputs)
x = Dense(256, activation='relu')(x)
predictions = Dense(10000, activation='softmax')(x)
model = Model(input=inputs, output=predictions)
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
for i in range(100000):
print i
model.predict(np.random.rand(1, 1024000))
Command used to run this script
THEANO_FLAGS='device=cpu,openmp=True,blas.ldflags=-lopenblas' OMP_NUM_THREADS=12 KERAS_BACKEND=theano python simple_model.py
Any help will be great. Thanks.
System setup
CPU with 12 cores
BLAS using OpenBLAS
Theano Bleeding Edge installation
Due Diligence
Using htop, I confirmed that OpenBLAS uses all cores when running Theano's check_blas script.
Using htop, I confirmed that NumPy is using OpenBLAS and all cores with the following script:
import numpy as np
def test_eigenvalue():
i = 500
data = np.random.rand(i, i)
np.linalg.eig(data)
for i in range(100000):
test_eigenvalue()
Hello,
Try using multiple samples:
data = np.random.rand(100, 1024000)
for i in range(100000):
print i
model.predict(data)
With that change, your code and the command you use to start it run multi-core for me.
Hi @unrealwill
Thanks for responding. I tried using multiple samples but am still seeing 2 threads.
Can you share some details of your system setup? Maybe I am missing something.
Keras is just a wrapper of theano.
The issue is probably with Theano.
Have you succeeded in running OpenMP with Theano?
Looking at https://github.com/fchollet/keras/issues/1245, it seems that some people may have had success with adding:
import theano
theano.config.openmp = True
Theano Test Code
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
import os
import theano
import theano.tensor as T
theano.config
print os.getpid()
def test_eigenvalue():
i = 500
data = np.random.rand(i, i)
np.linalg.eig(data)
def test_theano():
a = theano.shared(np.ones((100, 256), dtype=theano.config.floatX, order="C"))
b = theano.shared(np.ones((256, 256), dtype=theano.config.floatX, order="C"))
f = theano.function([], outputs=[T.dot(a, b)])
for i in range(200000):
print i
f()
def test_keras():
inputs = Input(shape=(5000,), dtype=theano.config.floatX)
# a layer instance is callable on a tensor, and returns a tensor
x = Dense(256, activation='relu')(inputs)
x = Dense(256, activation='relu')(x)
predictions = Dense(10000, activation='softmax')(x)
# this creates a model that includes
# the Input layer and three Dense layers
model = Model(input=inputs, output=predictions)
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
data = np.random.rand(100, 5000).astype(theano.config.floatX)
for i in range(100000):
print i
model.predict(data)
test_theano()
Run command
THEANO_FLAGS='device=cpu,openmp=True,blas.ldflags=-lopenblas' OMP_NUM_THREADS=12 KERAS_BACKEND=theano python simple_model.py
HTOP Output
Yes, I am using openmp=True; it's part of THEANO_FLAGS.
I removed my theanorc file.
~/.keras/keras.json is {"epsilon": 1e-07, "floatx": "float32", "backend": "theano","image_dim_ordering" : "th"}
test_keras() uses all threads for me.
I'm on ubuntu 16.04, I only have machines with GPU.
I installed keras and theano with pip .
I installed openblas via apt-get libopenblas-dev
Thanks for your time @unrealwill
My system specifications
Ubuntu 16.04
Machine with GPU
libopenblas-dev
Theano installed using pip
I did not use pip for Keras... maybe that is causing it to not be set up properly. Let me try it and report back.
No dice. Same behavior. I think I will try to compile the dot product via Keras wrappers and see what that does.
Thanks again @unrealwill. If you think of anything else let me know.
Solved it. Concretely, Theano's multi-core behavior is based on tensor sizes. There is some information here: http://deeplearning.net/software/theano/tutorial/multi_cores.html. Look at the variable openmp_elemwise_minsize. In the current case, I was not passing in a batch size, so internally the data was fed to Theano with batch size 32, which Theano decided was not good for multi-core processing. However, as soon as I changed the batch size to 100, it triggered multi-core.
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
import os
import theano
import theano.tensor as T
theano.config
print os.getpid()
def test_eigenvalue():
i = 500
data = np.random.rand(i, i)
np.linalg.eig(data)
def test_theano():
a = theano.shared(np.ones((100, 256), dtype=theano.config.floatX, order="C"))
b = theano.shared(np.ones((256, 256), dtype=theano.config.floatX, order="C"))
f = theano.function([], outputs=[T.dot(a, b)])
for i in range(200000):
print i
f()
def test_keras():
inputs = Input(shape=(5000,), dtype=theano.config.floatX)
# a layer instance is callable on a tensor, and returns a tensor
x = Dense(256, activation='relu')(inputs)
x = Dense(256, activation='relu')(x)
predictions = Dense(10000, activation='softmax')(x)
# this creates a model that includes
# the Input layer and three Dense layers
model = Model(input=inputs, output=predictions)
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
data = np.random.rand(100, 5000).astype(theano.config.floatX)
for i in range(100000):
print i
model.predict(data, batch_size=100)
test_keras()
Run Command
THEANO_FLAGS='device=cpu,openmp=True,blas.ldflags=-lopenblas' OMP_NUM_THREADS=12 KERAS_BACKEND=theano python simple_model.py
HTOP Output
Ok, I have only 8 cores on my machine, which probably explains the different behaviors.
Glad you solved it :)
@unrealwill Thanks for the help.
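For reference, the threshold that gated the fix above can be inspected and adjusted directly. This is a minimal sketch based on the multi_cores tutorial linked earlier; the config names come from Theano's documentation, and the default of 200000 should be double-checked against your installed version:

import theano

print theano.config.openmp                   # should be True when OpenMP is enabled
print theano.config.openmp_elemwise_minsize  # elementwise ops on smaller tensors stay single-threaded
# Lowering the threshold forces multi-threading for smaller batches,
# though for very small tensors that can cost more than it saves.
theano.config.openmp_elemwise_minsize = 20000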
Hi all,
I have the same issue (I am unable to force Keras + Theano to use more than 1 thread using OpenBLAS). I have followed the whole procedure here. It works on:
data = np.random.rand(100, 1024000)
for i in range(100000):
    print i
    model.predict(data)
but it doesn't work on :
THEANO_FLAGS='device=cpu,openmp=True,blas.ldflags=-lopenblas'
OMP_NUM_THREADS=8
KERAS_BACKEND='theano'
import pandas as pd
import numpy as np
from tqdm import tqdm
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM, GRU
from keras.layers.normalization import BatchNormalization
from keras.utils import np_utils
import keras.engine.topology
from keras.layers import TimeDistributed, Lambda
from keras.layers import Convolution1D, GlobalMaxPooling1D
from keras.callbacks import ModelCheckpoint
from keras import backend as K
from keras.layers.advanced_activations import PReLU
from keras.preprocessing import sequence, text
data = pd.read_csv('data/quora_duplicate_questions.tsv', sep='\t')
y = data.is_duplicate.values
tk = text.Tokenizer(nb_words=200000)
max_len = 40
tk.fit_on_texts(list(data.question1.values) + list(data.question2.values.astype(str)))
x1 = tk.texts_to_sequences(data.question1.values)
x1 = sequence.pad_sequences(x1, maxlen=max_len)
x2 = tk.texts_to_sequences(data.question2.values.astype(str))
x2 = sequence.pad_sequences(x2, maxlen=max_len)
word_index = tk.word_index
ytrain_enc = np_utils.to_categorical(y)
embeddings_index = {}
f = open('data/glove.840B.300d.txt')
for line in tqdm(f):
    values = line.split()
    values_conv = []  # reset per line; accumulating across lines mixes the vectors up
    for item in values[1:]:
        try:
            values_conv.append(float(item))
        except ValueError:
            pass
    word = values[0]
    coefs = np.asarray(values_conv, dtype='float32')
    embeddings_index[word] = coefs
f.close()
print('Found %s word vectors.' % len(embeddings_index))
embedding_matrix = np.zeros((len(word_index) + 1, 300))
for word, i in tqdm(word_index.items()):
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
import numpy as np
import os, fnmatch
import pdb
import torch
def RGB2HSD_GPU(X):
    """GPU counterpart of RGB2HSD for an (N, C, H, W) RGB tensor."""
    X = X.permute(0, 2, 3, 1)  # (N, C, H, W) -> (N, H, W, C)
    eps = torch.tensor(torch.finfo(X.dtype).eps, dtype=X.dtype, device=X.device)
    X = torch.where(X == 0, eps, X)  # avoid log(0)
    OD = -torch.log(X / 1.0)  # optical density per channel
    D = torch.mean(OD, X.ndim - 1)  # density D: mean OD over the channel axis
    D = torch.where(D == 0.0, eps, D)  # avoid division by zero
    cx = OD[:, :, :, 0] / D - 1.0  # chromatic coordinate cx
    cy = (OD[:, :, :, 1] - OD[:, :, :, 2]) / (np.sqrt(3.0) * D)  # chromatic coordinate cy
    D = D.unsqueeze(3)
    cx = cx.unsqueeze(3)
    cy = cy.unsqueeze(3)
    X_HSD = torch.cat((D, cx, cy), 3)
    return X_HSD.permute(0, 3, 1, 2)  # back to (N, C, H, W)
def RGB2HSD(X):
"""See appendix A and B van_der_laak_et_al_Cytometry_2000"""
assert X.max() > 50, "WARNING: RGB2HSD requires image in [0,255]"
# eps = np.finfo(float).eps
# above gave overflow
eps = 1e-10
X[np.where(X==0.0)] = eps
OD = -np.log(X / 1.0)
D = np.mean(OD,-1)
D[np.where(D==0.0)] = eps
cx = OD[:,:,0] / (D) - 1.0
cy = (OD[:,:,1]-OD[:,:,2]) / (np.sqrt(3.0)*D)
D = np.expand_dims(D,-1)
cx = np.expand_dims(cx,-1)
cy = np.expand_dims(cy,-1)
X_HSD = np.concatenate((D,cx,cy),-1)
return X_HSD
# def HSD2RGB(X_HSD):
# X_HSD_0, X_HSD_1, X_HSD_2 = tf.split(X_HSD, [1,1,1], axis=3)
# D_R = (X_HSD_1+1) * X_HSD_0
# D_G = 0.5*X_HSD_0*(2-X_HSD_1 + tf.sqrt(tf.constant(3.0))*X_HSD_2)
# D_B = 0.5*X_HSD_0*(2-X_HSD_1 - tf.sqrt(tf.constant(3.0))*X_HSD_2)
# X_OD = tf.concat([D_R,D_G,D_B],3)
# X_RGB = 1.0 * tf.exp(-X_OD)
# return X_RGB
def HSD2RGB_Numpy(X_HSD):
"""See appendix A van_der_laak_et_al_Cytometry_2000"""
X_HSD_0 = X_HSD[...,0]
X_HSD_1 = X_HSD[...,1]
X_HSD_2 = X_HSD[...,2]
D_R = np.expand_dims(np.multiply(X_HSD_1+1 , X_HSD_0), -1)
D_G = np.expand_dims(np.multiply(0.5*X_HSD_0, 2-X_HSD_1 + np.sqrt(3.0)*X_HSD_2), -1)
D_B = np.expand_dims(np.multiply(0.5*X_HSD_0, 2-X_HSD_1 - np.sqrt(3.0)*X_HSD_2), -1)
X_OD = np.concatenate((D_R,D_G,D_B), axis=-1)
X_RGB = 1.0 * np.exp(-X_OD)
return X_RGB
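# Example usage: a quick round-trip sanity check (illustrative sketch with
# hypothetical sizes; RGB2HSD expects images on the [0, 255] scale).
# HSD2RGB_Numpy exactly inverts the optical-density transform, so the
# round trip should reproduce the input.
def _roundtrip_check():
    img = np.random.uniform(1.0, 255.0, size=(64, 64, 3))
    hsd = RGB2HSD(img.copy())  # copy(): RGB2HSD overwrites zero-valued pixels in place
    rgb = HSD2RGB_Numpy(hsd)   # output comes back on the same scale as the input
    assert np.allclose(img, rgb)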
def image_dist_transform(img_hsd, mu, std, gamma, mu_tmpl, std_tmpl, args):
batch_size = args.batch_size
img_norm = np.empty((batch_size,args.img_size, args.img_size, 3, args.nclusters))
mu = np.reshape(mu, [mu.shape[0] ,batch_size,1,1,3])
std = np.reshape(std,[std.shape[0],batch_size,1,1,3])
mu_tmpl = np.reshape(mu_tmpl, [mu_tmpl.shape[0] ,batch_size,1,1,3])
std_tmpl = np.reshape(std_tmpl,[std_tmpl.shape[0],batch_size,1,1,3])
for c in range(0, args.nclusters):
img_normalized = np.divide(np.subtract(np.squeeze(img_hsd), mu[c, ...]), std[c, ...])
img_univar = np.add(np.multiply(img_normalized, std_tmpl[c, ...]), mu_tmpl[c, ...])
# img_univar = np.add(np.zeros_like(img_norm), mu[c,...])
img_norm[..., c] = np.multiply(img_univar, np.tile(np.expand_dims(np.squeeze(gamma[..., c]), axis=-1), (1, 1, 3)))
img_norm = np.sum(img_norm, axis=-1)
# Apply the triangular restriction to cxcy plane in HSD color coordinates
img_norm = np.split(img_norm, 3, axis=-1)
img_norm[1] = np.maximum(np.minimum(img_norm[1], 2.0), -1.0)
img_norm = np.squeeze(np.swapaxes(np.asarray(img_norm), 0, -1))
## Transfer from HSD to RGB color coordinates
X_conv = HSD2RGB_Numpy(img_norm[np.newaxis,...])
X_conv = np.minimum(X_conv,255.0)
X_conv = np.maximum(X_conv,0.0)
# X_conv *= 255
X_conv = X_conv.astype(np.uint8)
return np.squeeze(X_conv)
Myke Predko, Programming and Customizing PIC Microcontrollers. In order to make it as easy to use as possible, as well as being the most effective PICmicro reference possible, I have created this HTML interface, which includes: instructions for installing the files and development tools; source and executable code for all experiments, projects, and code "snippets" and macros presented in the book; two appendices in PDF format, Introduction to Electronics and Introduction to Programming; datasheets for the PICmicro MCU part numbers used in this book; and Internet links to manufacturers and web sites with additional information for your PICmicro project development. For more current information, I recommend that you periodically look at my web page as well as the web pages presented in the book and on this CD-ROM. As hard as I try, I know that there will be mistakes and typos in this book. Corrections will be listed and download links will be provided on my web page. Along with errata, you should check my web page for new projects and information. If you have found any useful sites that you think I should list, please send me an email with the link. One of the concerns that I received from the first edition of this book, "Programming and Customizing the PIC Microcontroller", was that I did not include any information for beginner programmers and electronic designers. In order to rectify this situation, I have included two appendices, which for space reasons have been stored on this CD-ROM as Adobe Acrobat PDF files.
Programming and Customizing the PIC Microcontroller By Myke Predko, E-Book
The board will be designed for single-sided assembly, but I will be including a "Top Side" pattern for proto-shops like AP Circuits. Power supply design is an art unto itself; if you use something different and your programmer doesn't work, I will insist that you change it to my design before I'm willing to answer any questions. This application will encompass both the "El Cheapo" programmer software as well as the "El Debug" test application. This will cause the configuration information to be stored in the … These two silicon diodes will each "raise" the ground reference to the voltage regulator by one forward diode drop.
While my name is on the cover of this book, this edition, as well as the first, would not have been possible without the generous help of a multitude of people and companies. This book is immeasurably richer due to their efforts and suggestions. I have over one thousand emails between myself and various individuals consisting of suggestions and ideas for making this second edition better than the first; I hope I have been able to produce something that is truly useful, and their contributions have helped make the PICmicro MCU probably the best supported and most interesting chip available on the market today. While I could probably fill several pages of names listing everyone who has answered my questions and made suggestions on how this second edition could be better, I am going to refrain for fear that I will miss someone.
The second aspect of "speed" is the speed of the PC or workstation that you are working with. For a programmer to be considered suitable for "production", it must be able to program and verify the part at different voltage levels. Another area of "user protection" I consider important is the restriction of making simple code changes, or "patches", to an application. The last type of programmer available is one which you assemble from plans from magazines or the Internet. One area that many people look to "improve" upon is the power supply circuitry; the "Gerber" files for this board will be placed on this web page as well as my own. Since the article on the "El Cheapo" programmer has been out, I just wanted to take a moment and review a few of the different programmers that are available on the market before presenting a very simple one you can build in an evening with many parts you probably have around your workbench. With the different issues discussed, I have gotten a few questions from people who have had problems with getting the programmer to work. After doing this, the voltage regulator will output … I am very interested in hearing what you have to say about the book, and I have been keeping track of corrections and passing them along to McGraw-Hill for future printings.
Everyone who authors an R package is curious about how many users download it. As far as I know, there's still no way to get information on all the downloads from all the R mirrors. Here I'm using the package cranlogs, which only gives information on the downloads from the RStudio mirror. It also does not let you know from where in the world these downloads were made. However, it has a major advantage: speed! The package cranlogs provides an easy (and much faster) method to get this information without having to download all the log files (which can take a long time).
I have written this little script, which I use to keep track of my packages’ downloads (here I’m using MetaLandSim as an example).
First of all let’s load all the required R packages:
#install.packages("cranlogs")
library(cranlogs)
library(ggplot2)
If we want to know about last week’s downloads:
#Last week's downloads
cran_downloads(packages="MetaLandSim", when="last-week")
##         date count     package
## 1 2019-03-30     7 MetaLandSim
## 2 2019-03-31     7 MetaLandSim
## 3 2019-04-01    11 MetaLandSim
## 4 2019-04-02    30 MetaLandSim
## 5 2019-04-03    30 MetaLandSim
## 6 2019-04-04    19 MetaLandSim
## 7 2019-04-05    11 MetaLandSim
Or about the overall downloads (the last date has to be the previous day):
#How many overall downloads
mls <- cran_downloads(packages="MetaLandSim", from = "2014-11-09", to = Sys.Date()-1)
sum(mls[,2])
So… the number of downloads MetaLandSim has is…
## 21868
We can now plot the resulting graph, depicting the daily downloads:
#Plot
gr0 <- ggplot(mls, aes(date, count)) + geom_line(colour = "red", size = 1)
gr0 + xlab("Time") + ylab("Nr. of downloads") +
  labs(title = paste0("MetaLandSim daily downloads ", Sys.Date()-1))
Or we can plot the cumulative downloads sum to get an idea about the rate of increase in download numbers:
#Cumulative
cumulative <- cumsum(mls[,2])
mls2 <- cbind(mls, cumulative)
#Plot
gr1 <- ggplot(mls2, aes(date, cumulative)) + geom_line(colour = "blue", size = 1)
gr1 + xlab("Time") + ylab("Nr. of downloads") +
  labs(title = paste0("MetaLandSim cumulative downloads until ", Sys.Date()-1))
Database SQL to Maple Type Conversions
SQL types without a Maple Equivalent
SQL to Maple
Maple to SQL
Many of the SQL types have Maple equivalents. However, a few types do not map easily to Maple. In those cases, Database attempts to generate the best match possible. Descriptions of the matches follow.
As Maple has no time or date type, the DATE, TIME, and TIMESTAMP SQL types need a representation. Database uses the number of seconds since epoch, stored as a numeric. With Maple arbitrary precision floats, this representation unifies the three SQL types into one Maple type with enough accuracy to support TIMESTAMP's nanosecond precision. Notice that these types have a limited accuracy (most notably DATE, which is only accurate to the day, a range of 86400 seconds), so the numeric that is passed for one of these formats may be different from the value that is returned.
Database represents BINARY, VARBINARY, and LONGVARBINARY as one dimensional Arrays of type integer.
The following shows how an SQL type is returned at the Maple level when retrieving data using GetData and ToMaple; the binary types, for example, come back as Array( datatype=integer ).
When sending data to SQL, Maple attempts to determine the correct SQL type to use. This section discusses the conversion process and how to override the automatic detection. These conversions are used in calls to PreparedStatement[Execute], UpdateData, and InsertRow.
Integers are converted to TINYINT, SMALLINT, INTEGER, BIGINT, or NUMERIC, depending on the value of the integer.
Numerics are converted to DOUBLE or NUMERIC, depending on the value of the numeric. Only the magnitude, and not the accuracy, of the number is used when determining which type to use. If the type chosen by Database does not have sufficient accuracy, then an explicit cast must be used. A numeric value representing a DATE, TIME, or TIMESTAMP requires an explicit cast.
Strings are converted to VARCHAR or LONGVARCHAR, depending on the length of the string.
Values of type Array( integer ) are converted to VARBINARY or LONGVARBINARY, depending on the length of the array.
If the SQL type that Database selects is incorrect, you can override this behavior by using an explicit cast. To perform a cast, use :: and the SQL type name you want to cast to. For example, 1::DOUBLE or 12346789::TIMESTAMP. Specifying an illegal cast is an error.
The Result module's UpdateData and InsertRow commands are able to look up column types, so using explicit casts with them is not required. However, looking up column types can require database access, which can introduce unacceptable overhead for these commands. To avoid this overhead, use an explicit cast.
To enter an SQL NULL as a data item, you can pass 'SQLNULL' into any of the functions that allow for inputting of data.
Getting data from a column containing an SQL NULL will return Maple NULL.
In this course you will learn how to program in Java, how to use Java EE as a server-side scripting language for web development, how to connect to a MySQL database, how to build a real-life web application from scratch, and how to deploy a project. In the Capstone Project, you will apply what you’ve learnt to develop a real-world interactive web application.
Module 10: Web Programming
- Understand how to build a website with HTML & CSS
- Understand Setting Development Environment for JSP
- Understand how to Use Github
- Understand how to Build Web Application with JSP and Servlet
- Understand the Concepts of Client Side and Server Side
- Understand the HTTP and the Web Request-Response Cycle
- And Many More …
Module 11: Web Application Development
- Understand the Concepts of Java EE
- Understand Backing Beans and Life Cycles
- Understand JPA Database Connection
- Understand Security Realms with Glassfish
- Understand JSF Resource Management, Resource Bundles and Internationalization
- Understand Automated deployment with Maven
- Understand RESTful Web Services with Jax-RS
- And Many More …
- Week 10: Web Programming – Module 10
- Week 11: Web Application Development – Module 11
- Week 12: Web Application Development with Java – Module 11
Topic: Online Shopping Store
Online shopping has gained ground as an accepted and widely used business paradigm. More and more businesses are implementing web sites providing functionality for performing commercial transactions over the web. It is reasonable to say that the process of shopping on the web is here to stay. The objective of this project is to develop an online store where products like clothes, electronics, and other similar goods can be bought from the comfort of home through the Internet. However, for implementation purposes, this project will deal with an online shop for mobile phones.
An online store is a virtual store on the Internet where customers can browse the catalog and select products of interest. The selected items may be collected in a shopping cart. At checkout time, the items in the shopping cart will be presented as an order. At that time, more information will be needed to complete the transaction. Usually, the customer will be asked to fill or select a billing address, a shipping address, a shipping option, and payment information such as credit card number (though in this project we will not integrate our application to any payment platform). An e-mail notification is sent to the customer as soon as the order is placed.
1. Overall Description
- Only registered users can purchase products
- A Contact Us page is available to contact the Admin for queries.
- There are three roles available: Visitor, User and Admin
- A Visitor can view available products
- A User can view and purchase products
- An Admin has some extra privileges, including all privileges of Visitor and User
- Admin can add/remove products and edit product information.
- Admin can add users, edit user information and remove users.
- Admin can ship orders placed by users and send a confirmation mail
1.1 Development Tools
- Back-end development – Java EE, Database – Mysql
- Editor/IDE – Use editor of your choice.
- Server – Local or remote
1.2 Web Pages details:
- Home Page
- About Us Page
- Product Page
- Checkout Page
- Cart page
- Single Product Page
- Contact Us Page
- Admin Page
- Logins Page
- Register Page
1.3 Main Menu: Aside from the menus listed below, other pages shouldn’t appear on the main menu; they should only be linked from their related content, so that you can navigate in the course of browsing or completing a transaction.
- About Us
- Contact Us
1.4 Project Detail:
Anyone can view the Online Shopping portal and the available products, but every user must log in with his/her username and password in order to purchase or order products. Unregistered visitors can register by navigating to the registration page. Only an Admin will have access to modify roles; by default, the developer is an ‘Admin’. Once you (the user) register on the site, your default role will be ‘User’.
- Home Page: The Home page should be informative enough to sell your products
- Products page: This should display all products in a gallery format with a centred short description, price and Add to Cart button under each product. Each product image should link to the “Single Product page”, where you can view product details.
- Single Product Page: On the Single Product page the user should be able to increase the product quantity; the default quantity should be 1.
- Checkout Page: Unregistered users should have the option to register on this page, otherwise they will not be able to check out. When the “Checkout” button is clicked (by a registered user) it should print the transaction details to a page, including: customer name, reference number, shipping address, customer email, phone number, and a message indicating either a successful or failed transaction; if failed, the reason for the failure should be stated.
- Cart Page: All products added to the cart should display on this page; on the cart page users should be able to remove items and increase product quantity at will.
- Admin pages: The admin page should only be accessible by typing its URL into the web browser, which will land you on the Login page. The admin dashboard should contain a summary of recent transactions (i.e., a report) and the following menus:
- Order – this should display the log of all transactions
- Product – List of all existing products with their status (Active or Inactive) and an option to edit or delete. Admin should be able to add new products.
- User – Admin should be able to View, Add, Edit, and Delete users.
- Note that admin should be able to navigate from the admin page to the main site.
A good shopping cart design must be accompanied with user-friendly shopping cart application logic. It should be convenient for the customer to view the contents of their cart and to be able to remove or add items to their cart. The Online Store application described in this project provides a number of features that are designed to make the customer more comfortable.
This project aims to help you understand the creation of interactive web pages and the technologies used to implement them.
Project Duration: 1 Month
- Lectures 84
- Quizzes 0
- Skill level All levels
- Language English
- Students 13
- Assessments Yes
Module 10: Web Programming
- Basic CSS – How to build a website with HTML & CSS
- How to Create Layout – HTML & CSS
- Responsive Design with Bootstrap 3
- More of CSS Tutorial
- How to put your website online – how to FTP to a domain & upload files to a webhost
- JS Tutorial – If Else & Comparison Operators
- Github Tutorial For Beginners – Github Basics for Mac or Windows & Source Control Basics
- GITHUB PULL REQUEST, Branching, Merging & Team Workflow
- Basic Terminal Usage – Cheat Sheet to make the command line EASY
- How the Internet Works for Developers – Pt 1 – Overview & Frontend
- Java Server Pages -1- JSP Introduction
- Java Server Pages -2- How To Take This Course
- Java Server Pages -3- JSP and Servlets Overview
- Java Server Pages -4- JSP Setup Dev Environment Overview
- Java Server Pages -5- Installing Tomcat for MS Windows
- Java Server Pages -6- Install Tomcat on Mac
- Java Server Pages -7- Install Eclipse on MS Windows
- Java Server Pages -8- Install Eclipse on Mac
- Java Server Pages -9- Connecting Eclipse to Tomcat
- Java Server Pages -10- JSP Hello World
- Java Server Pages -11- JSP Expressions
- Java Server Pages -12- JSP Scriptlets
- Java Server Pages -13- JSP Declarations
- Java Server Pages -14- Call Java class from JSP
- Java Server Pages -15- JSP Built-In Server Objects
- Java Server Pages -16- Including Files with JSP
- Java Server Pages -17- HTML Forms Overview
- Java Server Pages -18- JSP Forms – Write some JSP code
- Java Server Pages -19- JSP Forms Drop Down List
- Java Server Pages -20- JSP Forms Radio Buttons
- Java Server Pages -21- JSP Forms Checkbox Part 1
- Java Server Pages -22- JSP Forms Checkbox Part 2
- Java Server Pages -23- Cookies with JSP – Part 1
- Java Server Pages -24- Cookies with JSP – Part 2
- Java Server Pages -25- Cookies with JSP – Part 3
- Java Server Pages -26- HelloWorld Servlet Overview
- Java Server Pages -27- HelloWorld Servlet – Write some Code
- Java Server Pages -28- Comparing JSP and Servlets
- Java Server Pages -29- Read HTML Form Data with Servlets – Part 1
- Java Server Pages -30- Read HTML Form Data with Servlets – Part 2
- Java Server Pages -31- Differences between GET and POST
- Java Server Pages -32- Special Offer – Keep Learning
Module 11: Full Stack Web Application Development
- Java EE with MySQL -1- Starting with Glassfish
- Java EE with MySQL -2- Backing Beans and Life Cycles
- Java EE with MySQL -3- JPA Database Connection Part 1
- Java EE with MySQL -4- JPA Database Connection Part 2
- Java EE with MySQL -5- Security Realms with Glassfish Part 1
- Java EE with MySQL -6- Security Realms with Glassfish Part 2
- Java EE with MySQL -7- JSF Resource Management
- Java EE with MySQL -8- Resource Bundles and Internationalization
- Java EE with MySQL -9- EJB Timer Service
- Java EE with MySQL -10- SOAP Web Services with Jax-WS
- Java EE with MySQL -11- JSF Navigation
- Java EE with MySQL -12- Using Ajax with JSF
- Java EE with MySQL -13- Built-in JSF Validation
- Java EE with MySQL -14- Maven
- Java EE with MySQL -15- Automated deployment with Maven
- Java EE with MySQL -16- Resource deployment with Maven
- Java EE with MySQL -17- JavaMail Session
- Java EE with MySQL -18- JNDI Resources
- Java EE with MySQL -19- Preparing for Java EE 7
- Java EE with MySQL -20- JSF Faces Flow
- Java EE with MySQL -21- Configure the Flow
- Java EE with MySQL -22- JSF File Upload
- Java EE with MySQL -23- RESTful Web Services with Jax-RS
Using 了 for referring to habits
So far, I've learned that 了 is an aspect marker, not a tense marker. That is, 了 indicates "the aspect of finishing a task, regardless of tense".
However, during my studies, so far I have seen 了 in sentences which refer to a specific past event. But recently, I come across some sentences with 了 that don't refer to specific past events, but they refer to regular events.
So I want to know if it is correct to use 了 in such cases.
For example:
不少广东老人的每一天都从茶楼开始。他们都起得很早,五点就出来散步,锻炼身体,六点钟就到了茶楼。那儿老人很多,他们跟认识的人问好。王先生每天都带抱来。 他喜欢喝茶,等女儿和小孙女儿来。他等了一会儿,她们都来了。
As one can see, this passage is about habits of Guangdong people and the things they do regularly on a daily basis. It is not about a specific past event.
So, is it correct to translate
"六点钟就到了茶楼"
as
"(Regularly) by six o'clock, they have arrived the teahouse" (It refers to a daily habit, not a specific past event)
also, is it correct to translate
"他等了一会儿,她们都来了"
as
"(On a daily basis) after having waited for a while, they arrive. (and the action of arriving finishes)"
When "了" is placed after a "verb", it indicates the "completion of such act", so the translation should be:
"六点钟就到了茶楼" - "arrived at the tea house as early as six o'clock". Note the 就 means "as early as", it can be seen as a habitual act if stated as "每天六点钟就到了茶楼" - "arrives at the tea house as early as six o'clock every day".
"他等了一会儿,她们都来了" - "he waited for a while; they had all came".
不少广东老人的每一天都从茶楼开始。他们都起得很早,五点就出来散步,锻炼身体,六点钟就到了茶楼。那儿老人很多,他们跟认识的人问好。 王先生每天都来。 他喜欢喝茶,
等女儿和小孙女儿来。他等了一会儿,她们都来了。
The first part describes the habit of typical Cantonese elderly people, using Mr. Wang, who goes to the teahouse every day like the others, as an example.
The second part (王先生)等女儿和小孙女儿来。他等了一会儿,她们都来了。describes a unique event, independent of the first part. 了 in this part serves its usual function as an aspect marker that indicates the action is completed (he has waited; they have arrived)
It is better to add 這天 (this day) at the beginning of the second part and write 這天(王先生)(在茶楼)等女儿和小孙女儿来。他等了一会儿,她们都来了。
Without the time reference, people might treat the second part as an extension of the first part and think it meant 他每天都等女儿和小孙女儿来; it would then become disconnected from the last two sentences, which are clearly not describing something that happens every day.
Need info on creating a tile grid data structure in Java
I'm trying to create a game with a fixed size map (2D Tile array).
I suppose I can inherit other tiles from this base Tile, e.g.: a BankTile.
Placing and removing tiles is all too easy, but forming rooms keeps me puzzled.
Rooms can only be formed by adjacent tiles (4-way: above, below, left and right)
This way, 2 tiles can form a 'room', e.g. a Bank with a maximum amount of resources it can hold (sum of both its tiles).
I am wondering on how I should be implementing this, if there are any existing (and optimal) solutions to this.
Some things I considered:
Tiles must keep a reference to the Room they are in AND rooms must keep a vector of all tiles they hold. So adding tiles and removing tiles becomes pretty efficient.
Only obstacle this poses is the splitting of rooms: if 1 tile connects 2 rooms and this tile is removed, it should split the room and creating a new room for the split-off part.
Rooms will be 2D vectors with a begin-position (the room will be a surrounding square of the actual room polygon)
Rooms are 4-way linked lists of tiles
?
What needs to be done with rooms:
tiles need to be added and removed
allowing rooms to merge and split
calculating size of a room
finding a room quickly on a map
This image clarifies what I need:
Should I pick data structure 1? Rooms are 2D arrays with null-pointers for not-room-tiles.
Or should I pick data structure 2? Rooms are 4-way linked lists of tiles.
Or should I think of something else, e.g. vector of tiles = room?
Given the operations I need to be able to perform on them, which is best?
Just realized by looking at your image, it looks like it's more like " Data structure 1"
Edit:
I think you're looking for something like this then:
//In your game manager or whatever
public void setRoom(Room r, int startX, int startY){
    Tile[][] tiles = r.getTiles();
    int sizeY = tiles.length;
    int sizeX = tiles[0].length;
    for(int y = 0; y < sizeY; y++)
        for(int x = 0; x < sizeX; x++)
            //Adds the tile; for any empty tile you need additional checking
            worldGrid.setTile(tiles[y][x], startX + x, startY + y);
}
//In your worldGrid or whatever
private Tile[][] grid;

public void setTile(Tile t, int x, int y){
    grid[y][x] = t;
}
Same can be applied for removing a room. Just make sure you don't overlap rooms. Unless you want that ( empty tiles should be able to overlap without problem). But it needs additional checking so keep that in mind.
In this scenario logic is applied by reading the tiles from worldgrid, whether it's passable, non-passable, other gamelogic etc. The Rooms simply hold the tiles which make up for the room. These tiles are translated to the grid:
You can keep track of the room by adding a member in your Tile:
public class Tile {
    private int roomId = -1;

    public Tile(){
    }

    public int getRoomId() { return roomId; }
}
So if you need any information on the Room you could just do currentTile.getRoomId(). For example this id could serve as an index position of your Room in an Array. A Room manager for instance that holds all the room objects.
Of course you could make it more complex by making managers for each aspect and keep track of what Tile belongs to what room. It's up to you.
Players position in the 2D array is as follows
int indexX = (int) Math.floor(player.x / tileSize);
int indexY = (int) Math.floor(player.y / tileSize);
This will give us the indices of the current tile in our worldgrid. --> grid[indexY][indexX]
As for merging rooms have a manager that couples Room objects together. If you go by my example you should be able to retrieve the Room object by Tile. You can then "ask" the RoomManager what room is merged with the current one and read its properties like: size, type etc.
Or should I pick data structure 2? Rooms are 4-way linked lists of tiles
If you go with a "world grid" as in my example you don't need to. Simply request the surrounding tiles and see if they are passable once the player overlaps with a tile. Unless you move from tile to tile, as in snapping to another tile; then it would be as easy as checking player input. When the player hits the down arrow key, you simply test whether the tile below the current one, which is [y+1][x], is passable. If not, nothing happens.
I created a little image to show you what I mean (check OP). Thanks for your input.
So in that case the rooms are pretty much predefined. So what you could do is check where you place your room on the grid. Read out the tiles and merge them onto the game grid. Keep track of which tile belongs to what room, so when the player is on a specific tile you can request the room. Your 2D arrays are fine. You just need to translate them to your "world" 2D array and manage which rooms occupy what tile positions. You could predefine 2D arrays to serve as room templates. You then just read them into your world, starting at a specific point in your grid.
@TheDudeAbides made an Edit. Hope that helps.
I see, many thanks! Would it make much difference if I put the Room object in Tile? Create 1 room and "set" the Tiles to it (to access the room directly?). Because if I make 1 Room object and set it to the different tiles, would it use more memory than an integer id?
It's a design choice and ultimately up to you. You can reference the Room object in your tile. It doesn't take up a lot of memory since you're pointing to an existing object.
I love space, I love how mysterious and dangerous it is and to be able to fly around in a game like Helium Rain [Steam] is fantastic. I decided to have a chat with the developer and they’re very positive about Linux gaming.
We’ve covered Helium Rain here a few times before, so hopefully some of you will be familiar with it. Without further rambling, let's begin!
First of all, can you introduce yourself and Helium Rain?
“Hello! I'm Gwennaël Arbona from the indie developer Deimos Games. We've recently released our first title, Helium Rain, in Early Access on Steam. Helium Rain is a spaceflight and empire-building game where you play as the owner of a trading company, in a star system far away from us. Technology is limited in this universe, and different companies fight for control.”
What makes Helium Rain different to other space sims?
“Helium Rain has a focus on three important elements : realistic handling of ships, long travel times, and a dynamic universe. Ships have lots of inertia, they move in three dimensions, can turn independently from their velocity, and don't have a maximum speed. Fleets can take days to reach a destination, with a travel system that is essentially turn-based and requires you to plan ahead when you manage military fleets. And attacking a cargo ship, destroying resources or blockading an area have meaningful consequences on the world's simulated economy.”
Since travel time can take a while, how do you plan to keep players from getting bored?
“Right now, travel time is handled as a turned-based mechanic. We don't want players to wait for hours behind a screen, we just want them to move their fleets carefully, as they won't be able to come back at once if a threat appears. In the future, I'd like to make travelling a more seamless experience, but that's not something we know how we'll handle yet.”
What games inspired you to make Helium Rain and why?
“Our list of inspirations would be a very long one! I would describe the game as a collision of spaceflight and strategy game, with space sim influences like the X series or Kerbal Space Program, and games like Total War or Mount & Blade. Books, movies and TV shows are also a great source of ideas for me.”
Are you sticking with singleplayer, or is multiplayer on the roadmap too?
“Helium Rain is decidedly singleplayer only. This is an important decision on the design that enables much more complex simulations and a lot more player agency. We feel that many multiplayer space sims already exist, or are being developed right now, but not so many games offer a great empire-building experience. Our goal is to let the player have fun and experiment on the game. You can take the world over if you want to, which isn't something we could offer in a multiplayer game.”
For Helium Rain, you went with Unreal Engine 4. We've heard many mixed reports about UE4 and it's support for Linux, how has it gone for you?
“The engine does a good job at working on different platforms, but we still need to test carefully and look for stable engine releases that don't have breaking issues. I'd like to thank the volunteers who send pull requests with every version to make sure Linux is still well-supported!
For developers, the most annoying part of getting UE4 to run is the requirement to build the engine yourself first. It is an easy process, but a time-consuming one. Once the engine runs, differences with Windows are very limited, except for minor rendering differences and different compilers. We had only a handful of Linux-specific issues during development. On the client side, things depend a lot on your hardware and Linux distribution of choice. Our top Linux priority right now is AMD support, which isn't working well. We also had issues with some Linux distributions in the past, when the engine moved to a newer LLVM release that only some distributions were providing.
Overall, the environment for releasing Linux games is much better today than it was five years ago. Most game engines support Linux, the biggest game marketplaces support Linux, and the arrival of Vulkan should help with the video drivers that can still be an issue today.”
In terms of sales, how has the Linux version sold against Windows & Mac?
“We've been pleasantly surprised by the Linux sales! Estimates of the Linux market share are conflicting at best, and while we knew Linux gamers were supportive of games that were made available to them, we didn't think we would sell many Linux copies. But we actually sold 11% of all copies on Linux, a number that's been stable since we launched. For us, it's a confirmation that Linux gaming is alive and well, with highly supportive people buying games.
We didn't work on a Mac release, since Mac OS is officially available only on computers that wouldn't meet our required system specifications.”
11% sales from Linux sounds like a lot, that's well above what the Steam Hardware Survey shows for the Linux market share, any thoughts on that?
“Our take is that Linux gamers are much more enthusiastic about games on their platform, and that our game has a lot of deep mechanics to master, which probably attracts the same kind of people who love the idea of a free operating system. We also released the source code [GitHub] for the game, which might be another reason why Linux gamers get more interested than Windows gamers. I think everyone has a different reason.“
Your game is currently in Early Access, how do you plan to keep the community involved as the game progresses?
“We're very happy with how Early Access is going on right now. We receive a lot of feedback, sometimes so much that it's hard to process! We entered Early Access with a list of upcoming game features, but we didn't know which ones players wanted first, or if they wanted them at all. As a result, one of our biggest updates yet was built on player suggestions, and our next update is also an answer to something that was requested a lot. Our policy is that everything a player suggests can be a great idea for the game, if we can pull it off. Multiplayer is an example of a request that we can't fulfill because it breaks too many assumptions and requires too much work, but our next update brings a quick-fight mode that is a fun addition, somewhat easy to add, and useful for balancing the game.
We still have some of our initial feature list left to implement, such as a storyline, and we expect to keep working on the game for some time before we can release it.”
With all the changes happening on Steam, from Greenlight to Steam Direct, how have you found the experience? It must be harder with so many more titles arriving on Steam now?
“We found Steam to be a very crowded market, with a hundred new titles launching every week. We passed Greenlight a few years ago, and could observe the transition to the relaxed Steam Direct process. Our experience is that Steam brings little visibility to new games, and you need an active campaign on social media to get people to notice your game. Word of mouth is very important for us, which is why we track reviews and comments so that we know what players are annoyed with.”
What’s your testing procedure for the Linux version? Do you test on the open source AMD GPU drivers, or mainly NVIDIA?
“At this moment, we only test on NVIDIA hardware, based on what we currently own. When the game moves to a final release, we'll try to have more testing on AMD hardware ; but hopefully by this point we will have our AMD issues resolved.”
For other developers currently working with Unreal Engine 4 who are looking to do a Linux version of their game, any words of advice?
“An important piece of advice would be to get in touch with other UE4 developers, either on forums or the UE4 Discord channel. Most of the work on porting and releasing the game is very simple, but you'll often need pointers on specific issues and workarounds. It's also important to be cautious and test your game well before releasing, something that is always important but even more so on Linux, where the software environment can be very different from one machine to another.
And of course, the most important advice would be to actually release your game on Linux. Do it!”
I would like to thank Gwennaël for taking the time to have a chat with me, it’s always a pleasure to be able to do interviews like this, especially for a rather exciting game.
I have noticed anger towards Microsoft lately for "doing OSS wrong". In this post I will explain the following points:
- Microsoft is fine staying on their current course.
- As developers in the OSS ecosystem, we also have the right to do what we want.
- Most of our complaints don't impact the ecosystem negatively or positively.
During college, I had an opportunity to read Richard Dawkins's The Selfish Gene. The book attempts to explain evolution through anthropomorphizing genes. The general idea is as follows:
In describing genes as being "selfish", the author does not intend...to imply that they are driven by any motives or will, but merely that their effects can be metaphorically and pedagogically described as if they were. The contention is that the genes that are passed on are the ones whose evolutionary consequences serve their own implicit interest in being replicated, not necessarily those of the organism.
The OSS Organism
Let us pretend for a moment that the .NET OSS ecosystem is an organism. Inside that organism we have multiple genes operating: the developers, projects, and companies that make up the ecosystem. In Richard Dawkins's model, every gene operates "selfishly", in the direct interest of propagating itself, and it doesn't matter whether any action taken helps or hurts the ecosystem.
The claims of Microsoft stifling innovation, hurting OSS, and driving away developers may or may not be true. I would personally like to think it isn't the case, and I haven't seen any evidence to support those accusations. For the sake of argument, let's assume that Microsoft is doing what the critics claim. In the selfish gene model, Microsoft would be behaving in what it thought was its best interest of propagating itself. There is no right or wrong in a model where a gene is operating in what it thinks is its interest.
From my point of view, I can't get upset at Microsoft. I may not always agree with their choices, but my emotions rarely get stoked like some of the tweets I've seen come from current and ex .net developers.
One of the more interesting parts of Dawkins's theory is the idea of altruism: the idea that a gene can act against its own self-interest when doing so helps other, similar genes. This concept applies to two kinds of groups inside the .NET OSS ecosystem: projects that look like Microsoft, and projects that don't.
The projects that look like Microsoft will receive help. They will be supported in ways that other projects will not be supported. The projects that don't look like Microsoft will most likely be ignored, because nothing is more effective in killing OSS than obscurity.
These examples might suggest that there is a power-struggle between genes and their interactor. In fact, the claim is that there isn't much of a struggle because the genes usually win without a fight. However, the claim is made, if the organism becomes intelligent enough to understand its own interests, as distinct from those of its genes, there can be true conflict.
Inside of our OSS organism, the Microsoft gene is really powerful at propagating itself and other similar genes. The only time we are going to get a different .NET OSS ecosystem is when the system outgrows the Microsoft gene itself. That means the organism would have to realize what Microsoft is trying to do and act against it. In the case of Microsoft and OSS, this is not likely to happen.
If you think about OSS as an organism whose genes are people, projects, and companies, with Microsoft being the biggest, baddest gene, then it is hard to get angry. Each gene is acting in what it perceives as its own best interest, and it isn't making any decisions based on morality. I believe all .NET developers, including Microsoft, are acting in their own interest, and any claims to the contrary are questionable.
What we can do as genes in the OSS organism is help like genes succeed and replicate. It isn't going to be easy, but over time we should be able to influence the organism if we keep at it. The idea is not to get angry or happy about the state of the whole, but to focus on what you are doing and see where evolution takes us.
|
OPCFW_CODE
|
So to anyone other than me looking at this thread, I've got an interest check
So I have two ideas for forumfics/FWRPs/RPs that don't involve WoF. I may do an interest check for WoF ones after this.
Idea A: Unnamed, probably not suited to be an RP, but may work if other aspects are added. Also inspired by the book Uglies, which I'm currently reading.
Basically in this world, people can unlock their full potential. Before you turn sixteen, you're a Boring. Powerless, Wingless, Tailless, normal, Boring. When you turn sixteen, it's safe for you to drink a mixture that triggers your full potential to develop. After you drink the mixture, your potential develops and you are a Metamorph. You will be a Metamorph for an undetermined amount of time (at most, 2 years, but that length is rare). After you're done being a Metamorph, that means your potential is fully developed and has reached its highest point. You are now a Complete. The world itself is still being developed, but that's the main idea.
Idea B: Elementals, credit goes to my Roblox friends who are not on the forums
So basically in this world (Ethix) there are elementals and everyone is an elemental. An elemental can control an element, of which there are tons. Some examples of elementals are galaxy ones, magic ones (not quite as OP as they sound), fire, water, you name it (well, not literally, but your element suggestion may be taken into consideration). Now, one day a galaxy elemental named Hiane drank this potion she made. The potion gave her the power to create a dimension thing (all this seemingly really OP stuff will be explained if this becomes something), and so Hiane wanted to make a paradise to escape to, but she ended up making the nightmare realm because her bad emotions took over.
So the nightmare realm is how elementals get corrupted. If the elemental is going through something rough, they can get to the nightmare realm in their sleep (which actually isn't a nightmarish place, it's tranquil and quiet and quite nice if you ignore the fact it can corrupt you) and then in the nightmare realm their bad emotions can take over and corrupt them. So those are called dark elementals. Stay with me, there's more.
There are animals in this place, and then there are also animals who can control an element. These animals are called monsters. Why monsters? I don't know, I'm not the one who suggested the term.
And then there are hybrids. These are elementals, dark elementals, whatever, that can control multiple elements. They usually have something odd about them, called a deformity. Like wings. And monsters can be hybrids too. They also get a deformity. And then there can also be (though rarer than hybrids) normal elementals that can transform into an animal/monster.
Right. So that's that. Also Idea B was used as a forumfic once before, I got through like seven chapters but didn't have enough characters entered to keep it a forumfic and keep the plot going.
|
OPCFW_CODE
|
The AlertDialog component in Chakra UI necessitates the use of the leastDestructiveRef property. This means that when using the AlertDialog component, you need to supply a reference to the least potentially harmful element in the dialog box.
In the context of an AlertDialog, a "destructive action" is any action that could have notable repercussions or irreversible results, such as the removal of data. It's crucial to focus initially on the least harmful element in the dialog to avoid users accidentally confirming a harmful action.
When you supply the leastDestructiveRef property with a reference to the least potentially harmful element, the AlertDialog component ensures that focus is automatically given to this element when the dialog box opens. This makes the dialog box easier and safer to navigate and interact with, diminishing the chances of users unintentionally taking destructive actions.
The need to provide the leastDestructiveRef property aligns with the guidelines given by the Web Accessibility Initiative - Accessible Rich Internet Applications (WAI-ARIA), which aim to facilitate the creation of web content that's accessible to all users. Adherence to these guidelines ensures that dialog boxes are both usable and accessible to all users, including those who depend on assistive technology or keyboard navigation.
There are 7 different AlertDialog-related components:
AlertDialog: This is the parent component that holds all the other elements of the alert dialog. When you use it, you must provide a leastDestructiveRef prop. This should be a reference to an element which, when focused or clicked, performs a non-destructive action. This follows WAI-ARIA guidelines, helping to prevent users from accidentally confirming destructive actions.
AlertDialogHeader: This is used to display the title or main heading of the alert dialog. This component is optional, but recommended for providing context to the user about the alert dialog's content or purpose.
AlertDialogBody: This serves to display the main message or content of the alert dialog. This is often a brief description or question, guiding the user's response to the dialog.
AlertDialogContent: This serves as a wrapper for the AlertDialogHeader, AlertDialogBody, and AlertDialogFooter components. It helps in structuring the dialog box and can also be styled to fit your UI design requirements.
AlertDialogFooter: This is used for displaying actionable elements, such as buttons, for the user to interact with. For example, "Confirm" and "Cancel" buttons would reside here.
AlertDialogOverlay: This is a full-screen overlay that gets rendered behind the AlertDialog. This overlay aids in focusing the user's attention on the dialog by obscuring the rest of the user interface.
AlertDialogCloseButton: This is a specific component for adding a close button to the AlertDialog. This is typically placed in the top-right corner of the AlertDialog and allows users to dismiss the dialog without performing any action.
These components can be imported as follows:
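// Assuming Chakra UI v2, where these components are exported from the core package:
import {
  AlertDialog,
  AlertDialogBody,
  AlertDialogContent,
  AlertDialogFooter,
  AlertDialogHeader,
  AlertDialogOverlay,
  AlertDialogCloseButton,
} from "@chakra-ui/react";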
The example below renders a button labeled "Delete Customer" and opens an alert dialog when clicked. The alert dialog prompts the user to confirm the deletion action with the message "Are you sure? You can't undo this action afterwards." It provides two buttons for the user to choose from: "Cancel" and "Delete". The useDisclosure hook is used to control the open and close state of the dialog, and the useRef hook is used to reference the cancel button within the dialog.
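A minimal sketch of that example, assuming Chakra UI v2 with React (the DeleteCustomerDialog component name is just for illustration):

import * as React from "react";
import {
  AlertDialog,
  AlertDialogBody,
  AlertDialogContent,
  AlertDialogFooter,
  AlertDialogHeader,
  AlertDialogOverlay,
  Button,
  useDisclosure,
} from "@chakra-ui/react";

// "DeleteCustomerDialog" is a hypothetical name used only for this sketch.
function DeleteCustomerDialog() {
  const { isOpen, onOpen, onClose } = useDisclosure();
  // The Cancel button is the least destructive element, so it receives
  // focus when the dialog opens (via leastDestructiveRef).
  const cancelRef = React.useRef<HTMLButtonElement>(null);

  return (
    <>
      <Button colorScheme="red" onClick={onOpen}>
        Delete Customer
      </Button>

      <AlertDialog
        isOpen={isOpen}
        leastDestructiveRef={cancelRef}
        onClose={onClose}
      >
        <AlertDialogOverlay>
          <AlertDialogContent>
            <AlertDialogHeader fontSize="lg" fontWeight="bold">
              Delete Customer
            </AlertDialogHeader>
            <AlertDialogBody>
              Are you sure? You can't undo this action afterwards.
            </AlertDialogBody>
            <AlertDialogFooter>
              <Button ref={cancelRef} onClick={onClose}>
                Cancel
              </Button>
              <Button colorScheme="red" onClick={onClose} ml={3}>
                Delete
              </Button>
            </AlertDialogFooter>
          </AlertDialogContent>
        </AlertDialogOverlay>
      </AlertDialog>
    </>
  );
}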
Modifying the Animated Transition:
By default, the dialog opens with a scaling transition. However, you can customize the transition animation by using the motionPreset prop. By setting the value of motionPreset to "slideInBottom", "slideInRight", or "scale", you can change the transition effect of the modal.
For example, if you set motionPreset="slideInRight", the modal will slide in from the right side of the screen when opening. Similarly, motionPreset="slideInBottom" will make the modal slide in from the bottom of the screen. If you want to retain the default scaling transition, you can simply omit the motionPreset prop.
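For instance, a short sketch reusing the isOpen, onClose, and cancelRef values from the example above:

<AlertDialog
  motionPreset="slideInBottom"
  isOpen={isOpen}
  leastDestructiveRef={cancelRef}
  onClose={onClose}
>
  {/* ...overlay, content, header, body, and footer as before... */}
</AlertDialog>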
|
OPCFW_CODE
|
Andromeda paradox and quantum mechanics
Roger Penrose introduced the Andromeda Paradox as a thought experiment that delves into the implications of relativity and quantum mechanics on our understanding of simultaneity and reality. The scenario involves two observers walking past each other, potentially experiencing different "present moments" due to relativistic effects.
https://en.wikipedia.org/wiki/Rietdijk%E2%80%93Putnam_argument
Let's now take this a step further by introducing a "quantum version" of the paradox.
In this quantum scenario, an observer walking toward Andromeda might measure a particle (which is on Andromeda) in a definite state (e.g., spin up), while an observer walking away might observe the same particle in a superposition of states until he makes his measurement.
Considering the time elapsed between the first observer's measurement and the second observer's measurement, many years have passed on Andromeda. Does this imply that the measurement outcome of the second observer was already predetermined during this time interval? How do we reconcile the determinism implied by this paradox with quantum mechanics?
I guess that, like in the normal Andromeda Paradox, both observers are on earth. Is the measured particle on Andromeda?
Yes, I forgot to mention it. Thank you for pointing it out.
Oh boy. Now some Hollywood director reads this and goes on to make "The Andromeda Paradox" movie...
I don't see the quantum version adding anything to the paradox. I mean, how do you reconcile determinism in the original one?
On the one hand this whole area is quite subtle so to understand how modern physics deals with it you need quite a lot of learning (university level and beyond). Having said that, modern physics does offer a perfectly coherent account of all such phenomena so they do not present paradoxes or puzzles in that sense.
The chief thing I would offer to someone asking who has not learned quantum measurement theory or quantum field theory is that you have to beware of the loose usage associated with terms such as "measurement" and "observe" in popular accounts of science. Rather than those terms it is better, in the first instance, to speak about interactions and entanglement. For an example, one physical system (called by us a measuring device) interacts with another physical system (called by us a particle) and the two become entangled. What happens next depends on the individual case. It might be that the two subsequently become disentangled, or it might be that instead the larger one then interacts with further systems, spreading the entanglement out into many degrees of freedom, such that the disentanglement never occurs. In the latter case one finds that all the predictions of quantum physics are just as if a collapse of the wavefunction had occurred, but it is not necessary to pick any particular moment when the collapse takes place.
In hopes of clarifying a little, consider a sum such as
$$
f = a + e^{i \theta} b
$$
where $i^2 = -1$ and to follow this you will need either to know about complex numbers or just take it on trust. Suppose we wish to know the value of $|f|^2$. That's easy, it is
$$
|f|^2 = |a|^2 + |b|^2 + \left(a b^* e^{-i\theta} + a^* b e^{i\theta}\right)
$$
Now consider what happens when this $|f|^2$ is in fact a probability, as opposed to something like a length or a mass or a time. How do you measure a probability? You can't! Not in one go, at least. Rather you have to try some method such as run an experiment many times and take the average. But now the answers will depend on what happens with $\theta$. If $\theta$ always has the same value (and so do $a$ and $b$) then the average of $|f|^2$ is
$$
\langle|f|^2\rangle = |a|^2 + |b|^2 + \left(a b^* e^{-i\theta} + a^* b e^{i\theta}\right) \tag{1}
$$
but if $\theta$ varies randomly then the average of $|f|^2$ is
$$
\langle|f|^2\rangle = |a|^2 + |b|^2. \tag{2}
$$
The difference between (1) and (2) is what people are talking about when they discuss things like 'collapse of the wavefunction'. The reason is that any physical description leading to case (2) can here be replaced by a different physical description making the same prediction. The different description is that $|a|^2$ is the probability for one case and $|b|^2$ is the probability for another, and all we are doing is adding those probabilities.
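As a concrete illustration of my own (ignoring normalization): take $a = b = 1/\sqrt{2}$, so the bracketed cross term equals $\cos\theta$. With $\theta$ fixed at zero, equation (1) gives
$$
\langle|f|^2\rangle = \tfrac{1}{2} + \tfrac{1}{2} + 1 = 2,
$$
whereas with $\theta$ varying randomly, equation (2) gives $\langle|f|^2\rangle = 1$. The whole difference between the two averages is the interference term.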
When two different physical descriptions lead to the same physical predictions then you get debates about which physical description is the more elegant, and the debate cannot be resolved by experimental test. This does not mean the debate is without value, because it concerns things like elegance and coherence of ideas and these are important in science.
Marco Fabbri,
The only way to explain the EPR experiment in a local, relativistic way is to introduce deterministic hidden variables. This means that both measurements need to have predetermined results.
If you are not familiar with this argument I'll describe it below.
You have a pair of spin-entangled particles which are sent to two distant locations, A and B. At both locations you have detectors fixed on the same axis, say Z.
When A measures its particle (say it's spin-UP on Z) the spin of particle B has to be spin-Down; this is what QM says. You now have two options:
You assume locality.
This implies that the measurement at A cannot change the B particle in any way. Since after the A measurement B is in a spin-Down on Z state, and that state did not change, it logically follows that, even before the A measurement, the B particle was in a spin-Down on Z state.
Once we have established that the Z-spin of the B particle was predetermined, it also follows that the Z-spin of the A particle was predetermined, so that the perfect anticorrelation is preserved.
You reject locality.
In this case you might say that the result of the "first" measurement is random, and then the spin of B is instantly collapsed to the opposite value. Such a scenario is not compatible with relativity and you need to introduce an absolute frame of reference to decide which measurement is the first one.
To answer your question, there is no incompatibility/paradox between QM and determinism, on the contrary, determinism is the only way to make sense of QM in a relativistic setup.
It is also the case that observers always agree. The observer walking away in your example does not actually observe anything. He cannot observe the particle in a superposition; he may just assume that because he has no information about the particle. So, one observer has better information than the other; it's not really a paradox.
Both forks you presented (1 and 2) are incorrect. The commentary on fork 1 ignores Bell. Bell shows clearly that locally predetermined outcomes are incompatible with the correct predictions of QM. The commentary on fork 2 is wrong because there is no relevant order in measuring entanglement. The predictions of QM do not specify a causal direction. You are free to assume one, but relativity has nothing to do with the quantum version of this.
@DrChinese, those forks are the only logically consistent positions you can have in light of the argument presented here. The argument was not about Bell, but even in that case (1) survives in the form of superdeterminism. The logical principle of the excluded middle implies that options (1) and (2) are the only possible ones (either locality is true or it is not). But if you go with (2) you need to introduce an order so that you can distinguish between cause and effect; otherwise the theory is incomplete.
Whether you call QM incomplete or not is, today, more a definitional or philosophical question. You are living in the EPR of 1935. I would not agree with your characterization of there being only 2 forks, as the experimental proofs of quantum nonlocality demonstrate something that doesn't quite fit into either of your buckets. At any rate, dismissal of Bell undermines almost everything you wrote as an answer. Mentioning superdeterminism is flawed because there is no such theory or interpretation at this time. And discussion of it belongs in a separate question, not in an answer to the OP.
|
STACK_EXCHANGE
|
/* Output Verification Module
This module verifies that the output results for water and PCM conform
to the law of conservation of energy, and throws warnings if either
does not.
Authors: Thulasi Jegatheesan, Spencer Smith, Ned Nedialkov, and Brooks
MacLachlan
Date Last Revised: June 24, 2016
*/
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "parameters.h"
#include "verify_output.h"
int verify_output(double time[], double tempW[], double tempP[], double eW[], double eP[], struct parameters params, int sizeOfResults){
/* Using malloc() here to increase max array size. Should work as long as tstep is 1.0 or greater.
If these arrays are initialized simply by "double eCoil[sizeOfResults-1]", for example, the program will
require that tstep be at least 1.3 */
double *deltaTime, *eCoil, *ePCM, *eWater;
deltaTime = (double *) malloc((sizeOfResults-1) * sizeof(double));
eCoil = (double *) malloc((sizeOfResults-1) * sizeof(double));
ePCM = (double *) malloc((sizeOfResults-1) * sizeof(double));
eWater = (double *) malloc((sizeOfResults-1) * sizeof(double));
int i;
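/* Integrate the heat flow over each time step with the trapezoid rule:
eCoil is the energy delivered by the coil to the water, ePCM is the energy
transferred from the water to the PCM, and eWater is the remainder stored
in the water. */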
for(i = 0; i < sizeOfResults-1; i++){
deltaTime[i] = time[i+1] - time[i];
eCoil[i] = params.hc * params.Ac * deltaTime[i] * (params.Tc - tempW[i+1] + params.Tc - tempW[i]) / 2;
ePCM[i] = params.hp * params.Ap * deltaTime[i] * (tempW[i+1] - tempP[i+1] + tempW[i] - tempP[i]) / 2;
eWater[i] = eCoil[i] - ePCM[i];
}
double eWaterTotal = 0;
double ePCMTotal = 0;
int j;
for(j = 0; j < sizeOfResults-1; j++){
eWaterTotal += eWater[j];
ePCMTotal += ePCM[j];
}
double errorWater, errorPCM;
errorWater = fabs(eWaterTotal - eW[sizeOfResults-1]) / eW[sizeOfResults-1] * 100;
errorPCM = fabs(ePCMTotal - eP[sizeOfResults-1]) / eP[sizeOfResults-1] * 100;
int warnings = 0;
if(errorWater > params.ConsTol){
printf("Warning: There is greater than %f%% relative error between the energy in the water output and the expected output based on the law of conservation of energy.\n", params.ConsTol);
warnings += 1;
}
if(errorPCM > params.ConsTol){
printf("Warning: There is greater than %f%% relative error between the energy in the PCM output and the expected output based on the law of conservation of energy.\n", params.ConsTol);
warnings += 2;
}
/* Release the heap-allocated work arrays before returning. */
free(deltaTime);
free(eCoil);
free(ePCM);
free(eWater);
return warnings;
}
|
STACK_EDU
|