OLED display using dspic33f i2c
I am trying to use an OLED display with a dsPIC33F Microchip microcontroller. I've successfully initialized the display and sent data to it over I2C, and I managed to display a small font (8 pt) on the OLED. However, when I try a bigger font, part of each character is cut off (the whole character isn't shown).
The code is as the following.
static unsigned char buffer[1024] = {
// @18 '2' (5 pixels wide) //generated by using The Dot Factory
0xC0, 0x80, // ## #
0xA0, 0x80, // # # #
0x90, 0x80, // # # #
0x8D, 0x80, // # ## ##
0x87, 0x00, // # ###
// @9 '2' (4 pixels wide)
0x00,
0xC1, // ## #
0xA1, // # # #
0x91, // # # #
0x8E, // # ###
0x00
};
void Oled_init(void){
command(0xAE); //display off
command(0xD5); //display clk div
command(0x80); //ratio -- continuation of clk_div
command(0xA8); //mux ratio
command(0x3F);
command(0xD3); //display offset
command(0x0); //no offset- continuation of display_offset
command(0x40 | 0x00); //display starting line
command(0x8D); //charge pump
command(0x14); //internal vcc
command(0x20); //memory mode
command(0x00);
command(0xA1); //remap columns
command(0xC8); //remap rows
command(0xDA); //com pins
command(0x12);
command(0x81); //set contrast
command(0xCF);
command(0xDB); //set Vcomh
command(0x40);
command(0xD9); //pre charge
command(0xF1);
command(0xA4); //entire display on
command(0xA6); //normal display
command(0x2E); //deactivate scroll
command(0xAF); //display on
}
void display (void){
command(0x21); //column start address
command(0);
command(0x7F);
command(0x22); //set page address
command(0x00); //min
command(0x07); //max //128x64resolution
unsigned int i = 0;
unsigned char x = 0;
for(i = 0; i < 1023; i++){
OpenI2C2(I2CConfig2, I2C2_BRG);
IdleI2C2 ();
StartI2C2 ();
while(I2C2CONbits.SEN);
MasterWriteI2C2(0x78);
MasterWriteI2C2(0x40); //Co = 0, D/C = 1, last 6 bit = 0
for (x = 0; x < 16; x++){
MasterWriteI2C2(buffer[i]);
i++;
}
i--;
StopI2C2 ();
while(I2C2CONbits.PEN);
CloseI2C2();
}
}
This problem has been troubling me for the last few weeks and I still can't find a solution. Thanks in advance for any help.
Is it working with pixel coordinates or character coordinates? Looks like pixel. Just shift them down.
Wait.. I think I've misunderstood how it is supposed to work..
What do you mean by "shift them down"? Do you mean adding 0x00 in front of the buffer?
E.g., static unsigned char buffer[] = { 0x00, 0x00, 0x00, 0x00, ...., 0xC0, 0x80, 0xA0, 0x80, 0x90, 0x80, 0x8D, 0x80, 0x87, 0x00 }
Can you show the code for the small font? I think the issue is with this 16 in the inner loop.
Oh... the code for the small font is already in the snippet. In the static buffer there is the 4-pixel-wide glyph, which is {0x00, 0xC1, 0xA1, 0x91, 0x8E, 0x00}, and what I did was call the display function right after OLED init in the main program.
But you have two images. Which one is produced by the presented code? Anyway show the other one as well
Oh, sorry for the confusion. Both are actually produced by the same code; I just commented out the second part of the buffer when capturing the first image, and vice versa. I should have presented both together.
I think this 16 in the inner loop should be replaced by 64/<character height>...
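For context on the page math being discussed: on an SSD1306-class controller (which the init sequence above suggests), display RAM is organized as 8-pixel-tall "pages", and each data byte fills one 8-pixel column within a page, so a 16-pixel-tall font must be split across two pages. Tools like The Dot Factory emit glyphs row by row, so the bitmap has to be reordered into page bytes before sending. This is a hedged sketch of that conversion; the function name and layout are illustrative, not from the poster's code:

```python
def rows_to_pages(rows, width, height):
    """rows: one int per pixel row, MSB = leftmost pixel.
    Returns SSD1306-style page bytes, top page first, left column first."""
    pages = []
    for page in range((height + 7) // 8):      # each page covers 8 pixel rows
        for x in range(width):                 # one output byte per column
            byte = 0
            for bit in range(8):
                y = page * 8 + bit
                if y < height and (rows[y] >> (width - 1 - x)) & 1:
                    byte |= 1 << bit           # bit 0 = topmost row of the page
            pages.append(byte)
    return pages
```

A glyph taller than 8 pixels then produces two bytes per column, one for each page, which is why a 16-pt glyph written with 8-pt assumptions loses half of itself.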
cloning mixed Linux - Win xp on new SSD for laptop
I have a multi-boot system on my MSI Wind netbook that includes Win XP Home, Ubuntu, and another Linux distro. The internal HD is 160 GB, but most of it isn't used. I just bought a 60 GB SSD and would like to use it as the primary disk in my netbook. I would like to clone the Win XP and Ubuntu volumes to the SSD, which would leave me plenty of space, even if I added another distro later. With GParted, I can see that there is an initial logical volume of about 4 GB (/dev/sda1) with 775 MB used, and by mounting and checking it I can see that it has a boot folder in it. It is formatted as ext3 (no flags). I don't understand the MBR concept very well, so I'm wondering if this is where the MBR is stored. The next volume is my Windows volume and is formatted NTFS with a boot flag. Subsequent volumes are clearly Linux volumes, including swapfile volumes. I have already formatted the SSD as NTFS, but as of yet haven't added any partitions.
(1) How do I clone the volumes I want onto the SSD? (Presumably with dd or a similar utility).
(2) Does the destination volume have to be initially set to be the same as the source volume?
(3) Do I need that first partition or would a new one automatically be created if I reinstall Ubuntu or any Debian-based distro from scratch or use a utility to fix the MBR?
If you clone, the destination disk needs to be set up the same as the disk being copied. Windows is not as portable as the unices.
You're better off doing a new install of XP on the disk on a pre-set size partition formatted ntfs.
When I was a kid they used to say GIGO. Garbage in garbage out. That was a LONG time ago and still holds true.
I am not a fan of cloning unless the system is exactly the same hardware and I use only base installs for images. Too many oddities can happen later or with changes. For example the linux distro might have a deal about ssd's somewhere.
The two basic approaches are a file-based copy or a bit-based copy.
When I do clone, I tend to use g4u, which is dd-based, and I have used it to up-size and down-size drives.
For a file-based copy you might look at any number of apps, from live CDs to partimage, Clonezilla, and OEM software.
It doesn't take that long to load it all up.
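A bit-based copy is conceptually just a byte-for-byte block copy, which is what dd does. A minimal Python sketch of the idea, for illustration only (real device cloning should still use dd or g4u, which handle devices, errors, and block sizes properly):

```python
def bit_copy(src_path, dst_path, block_size=1 << 20):
    """Copy src_path to dst_path byte for byte in fixed-size blocks,
    conceptually what `dd bs=1M` does."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(block_size)
            if not block:
                break
            dst.write(block)
```

Pointed at raw device nodes instead of files, this is exactly the partition-at-a-time cloning described later in the thread.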
Thanks for both of your replies. I ended up using a hybrid approach. I cloned the Windows xp volume, the small bootloader volume and the UNR 9.10 volume with dd, one at a time after first making a partition on the SSD large enough to hold each volume. The Windows and UNR had a lot of work put into them, and I didn't want to lose that. I also wanted grub, not grub2 as the bootloader, so after I verified that the Windows partition worked, I reinstalled CrunchBang 9.04 from scratch. The grub that resulted had all three OSes on the menu, and all worked.
Bootup time saving with Win xp and CrunchBang was spectacular. XP booted up in about 50 seconds on my netbook, a saving of more than a minute. CrunchBang and UNR used to take 55 sec to a minute to boot up to a working desktop (including wireless connection) on the old HD. The SSD cut that in about half for CrunchBang, and took off about 10 sec for UNR. I wonder if Jefro's point explains the bigger saving in CB, since it was installed from scratch and UNR wasn't. It won't matter - as soon as UNR 10.04 is released, I'll be replacing 9.10.
About this job
The Standard is looking for a passionate and driven individual with a mind that loves data to be a Senior Data Engineer on a Portland-based team poised to revolutionize one of the most critical parts of our business.
This team is responsible for transforming the digital experience of our customers, streamlining our operations, and revolutionizing the way we deliver software. We’re a startup inside a multi-billion dollar company with big objectives, and we’re looking for somebody who is up for a challenge.
What You’ll Do
- Bring new applications, repositories, and processes to the table that improve the quality of our data and the systems that consume it.
- Work directly with leaders and executives to understand the information that powers our operations and customer experience.
- Work with new tools, technologies, and infrastructure to deliver new software products to our customers and modernize our existing operational platforms.
- Sling code – We’ll be developing prototypes, proof-of-concepts, and highly-scalable software products upon which our customers and internal operations rely. You’ll write code and queries to power the information flow of these applications.
- Develop new and exciting analytics and intelligence that helps us understand our markets, risks, and customers in new and innovative ways.
- Be a careful steward of our customers’ data – They rely on us to keep their data safe and secure.
Skills & requirements
Who We Want
We’re after somebody who really stands out from the crowd – An engineer who lives and breathes data. If you’re a good candidate, you will have:
- Amazing critical thinking skills with the ability to take abstract concepts and ideas, distill them down, and make them real. You want to understand not only how something works, but why it works and how it can be improved.
- A keen sense for problem solving - sniffing out issues and bugs during development and in production systems.
- Excellent communication skills with the ability to switch contexts between highly technical and business-focused topics.
- The ability to see data from a number of different elevations. You know how to model databases from their high-level entities, but you also know how to get far lower and dig into query execution plans.
- A pragmatic data mind that knows several different ways to couple applications to data, and understands the best approach to use under certain constraints.
- Lots of experience working in highly-iterative development processes. You understand the principles of them, but aren’t married to any particular methodology.
- Data modeling and development skills in multi-tier and highly scalable applications, particularly:
- Relational database modeling and design on a number of different database platforms, including Microsoft SQL Server and Oracle. You know how to design databases pragmatically, finding the “normal” in normal-form, and you know your identities from your sequences.
- A wealth of experience in the structured query language world. Experience with both T-SQL and PL-SQL, including procedures, functions, views, and triggers. You know when to use ‘em, and just as importantly, you know when not to.
- Extract-Transform-Load (ETL) development with a firm understanding of the concepts therein.
- Analytics development using the Microsoft Business Intelligence (MSBI) stack, including both SQL Server Integration Services (SSIS) and SQL Server Reporting Services (SSRS).
Who We Really Want
If you can check all the boxes above and still want to stand out from the crowd, our ideal candidate will also have:
- A development background that goes beyond SQL and databases. You’ve built the applications and web services that rely on the data too. You know acronyms like REST and SOAP just as well as PK and FK.
- Synthetic data modeling and metadata management, with some experience in the concepts of master data management (MDM) and enterprise information integration (EII). You know why most MDM initiatives fail and can guide us in the right direction.
- Impressive successes in real-time data integration. You’ve taken information from a number of different source systems, glued it together, and delivered it to a consuming application in real-time.
- Built integration to cloud-based and other SaaS solutions, particularly SalesForce.com. You’ve worked with hybrid topology solutions and you’ve plugged SalesForce.com into other solutions bi-directionally.
- Developed applications and databases at scale. Millions of rows and terabytes of data don’t scare you; they motivate you to optimize queries and seek out new ways of doing things.
- Experience in new and/or open source relational and no-SQL database technologies. You are at home with relational systems such as MySQL, MariaDB, and Postgres, no-SQL databases like CouchDB and Dynamo, and caching databases like memcache and Redis. You know what tool is right for the job.
About The Standard
The Standard is a family of companies dedicated to one core purpose: helping people achieve financial well-being and peace of mind. Founded in 1906 in Portland, Oregon, The Standard has earned a national reputation for quality products, expert resources, superior service, innovation and strong financial performance. The Standard specializes in providing group and individual disability insurance, group life, AD&D, group dental and vision insurance, group voluntary insurance, absence management services, retirement plan products and services and individual annuities. We provide insurance to 23,000 groups. More than 6 million customers nationwide count on us to keep our promises. We're committed to doing just that, now and in the future.
IT at The Standard
When you work at The Standard, you are part of a company that provides customers financial well-being and peace of mind. As a member of our IT team, you work side by side with the business, pursuing strategic opportunities for our company. It’s an exciting time with new products, new distribution channels and new customer needs that are driving big investments in technology. At The Standard, we are large enough for big opportunities and small enough for big impact.
What The Standard Offers
- An opportunity to be part of a fast-paced team with lots of exciting things in front of them.
- Be part of a team managed by engineers with backgrounds in startups and enterprise software who understand the unique demands of software product development.
- A world-class location in the heart of downtown Portland, Oregon – One of today’s most vibrant cities. The Standard’s campus is surrounded by food carts, breweries, restaurants, and an eclectic mix of local shopping.
- Bike lockers, multiple exercise rooms, locker rooms & showers, and access to almost every transit line within a block.
Hi Jeremy, Running the copy command from the command prompt works. So how do I avoid having to run the commands from the command line every time I regenerate the online help? Getting rid of the plus sign in the directory path doesn't solve the problem as to why the graphics aren't being automatically copied to the wrap directory.
Thanks.
Martha

-----Original Message-----
From: Jeremy H. Griffith [mailto:jer...@omsys.com]
Sent: Thursday, September 10, 2009 3:28 PM
To: framers at lists.frameusers.com
Cc: Martha Lee
Subject: Re: Mif2Go graphics question

On Thu, 10 Sep 2009 08:58:00 -0400, "Martha Lee" <martha.lee at coventor.com> wrote:

>I do want to use the original graphics; as you said, generating them results
>in poorer quality. The wrap path and the CopyGraphicsFrom directory are not
>the same; they have the same name, but are in different locations. The wrap
>path is C:\rep\documentation\trunk\MEMS+OmniHelp\MEMSplus. The CopyGraphics
>directory is on the C:\rep\documentation\trunk\MEMSplus. But if I change the
>wrap directory to some other name, it doesn't solve my problem.

No, I wouldn't expect it to, since they are different actual locations.

>Changing the StripGraphPath to Yes still doesn't get my graphics into the
>wrap directory.

That was so that the HTML files could see them if they were there. If the original path were preserved, they would be looking elsewhere.

>This is driving me nuts. Any other suggestions?

The next thing I'd do to diagnose is open a Command Prompt window, enter the commands we use by hand, and see if any problem appears:

cd C:\rep\documentation\trunk\MEMS+OmniHelp\MEMSplus
copy /Y "C:\rep\documentation\trunk\MEMSplus\*.jpg"
copy /Y "C:\rep\documentation\trunk\MEMSplus\*.gif"

See if the files are really copied this time. If not, the path is wrong. But I suspect they will be copied. When we do the copy, we add one more part, the destination path, to each copy:

C:\rep\documentation\trunk\MEMS+OmniHelp\MEMSplus

Why would that be a problem? Because "+" has a meaning to the copy command, and will make the whole line mean something else. That is why you should NEVER, NEVER, NEVER use any characters in a file or path name other than letters and digits...

HTH!

-- Jeremy H. Griffith, at Omni Systems Inc. <jeremy at omsys.com> http://www.omsys.com/
using System;
using FFImageLoading.Work;
namespace FFImageLoading.Transformations
{
public class RotateTransformation : TransformationBase
{
public double Degrees
{
get;
set;
}
public bool CCW
{
get;
set;
}
public bool Resize
{
get;
set;
}
public override string Key => $"RotateTransformation,degrees={Degrees},ccw={CCW},resize={Resize}";
public RotateTransformation()
: this(30.0)
{
}
public RotateTransformation(double degrees)
: this(degrees, ccw: false, resize: false)
{
}
public RotateTransformation(double degrees, bool ccw)
: this(degrees, ccw, resize: false)
{
}
public RotateTransformation(double degrees, bool ccw, bool resize)
{
Degrees = degrees;
CCW = ccw;
Resize = resize;
}
protected override BitmapHolder Transform(BitmapHolder bitmapSource, string path, ImageSource source, bool isPlaceholder, string key)
{
return ToRotated(bitmapSource, Degrees, CCW, Resize);
}
public static BitmapHolder ToRotated(BitmapHolder source, double degrees, bool ccw, bool resize)
{
if (degrees == 0.0 || degrees % 360.0 == 0.0)
{
return source;
}
if (ccw)
{
degrees = 360.0 - degrees;
}
double num = -Math.PI / 180.0 * degrees; // rotation angle in radians, negated for the inverse mapping below
int width = source.Width;
int height = source.Height;
int num2;
int num3;
// Keep the source dimensions unless resizing; otherwise compute the rotated bounding box below.
if (!resize || degrees % 180.0 == 0.0)
{
num2 = width;
num3 = height;
}
else
{
double num4 = degrees / (180.0 / Math.PI);
num2 = (int)Math.Ceiling(Math.Abs(Math.Sin(num4) * (double)height) + Math.Abs(Math.Cos(num4) * (double)width));
num3 = (int)Math.Ceiling(Math.Abs(Math.Sin(num4) * (double)width) + Math.Abs(Math.Cos(num4) * (double)height));
}
int num5 = width / 2;
int num6 = height / 2;
int num7 = num2 / 2;
int num8 = num3 / 2;
BitmapHolder bitmapHolder = new BitmapHolder(new byte[num2 * num3 * 4], num2, num3);
int width2 = source.Width;
for (int i = 0; i < num3; i++)
{
for (int j = 0; j < num2; j++)
{
int num9 = j - num7;
int num10 = num8 - i;
double num11 = Math.Sqrt(num9 * num9 + num10 * num10);
double num12;
if (num9 == 0)
{
if (num10 == 0)
{
bitmapHolder.SetPixel(i * num2 + j, source.GetPixel(num6 * width2 + num5));
continue;
}
num12 = ((num10 >= 0) ? (Math.PI / 2.0) : 4.71238898038469);
}
else
{
num12 = Math.Atan2(num10, num9);
}
num12 -= num;
double num13 = num11 * Math.Cos(num12);
double num14 = num11 * Math.Sin(num12);
num13 += (double)num5;
num14 = (double)num6 - num14;
int num15 = (int)Math.Floor(num13);
int num16 = (int)Math.Floor(num14);
int num17 = (int)Math.Ceiling(num13);
int num18 = (int)Math.Ceiling(num14);
if (num15 >= 0 && num17 >= 0 && num15 < width && num17 < width && num16 >= 0 && num18 >= 0 && num16 < height && num18 < height)
{
double num19 = num13 - (double)num15;
double num20 = num14 - (double)num16;
// Bilinearly interpolate between the four neighbouring source pixels.
ColorHolder pixel = source.GetPixel(num16 * width2 + num15);
ColorHolder pixel2 = source.GetPixel(num16 * width2 + num17);
ColorHolder pixel3 = source.GetPixel(num18 * width2 + num15);
ColorHolder pixel4 = source.GetPixel(num18 * width2 + num17);
double num21 = (1.0 - num19) * (double)(int)pixel.A + num19 * (double)(int)pixel2.A;
double num22 = (1.0 - num19) * (double)(int)pixel.R + num19 * (double)(int)pixel2.R;
double num23 = (1.0 - num19) * (double)(int)pixel.G + num19 * (double)(int)pixel2.G;
double num24 = (1.0 - num19) * (double)(int)pixel.B + num19 * (double)(int)pixel2.B;
double num25 = (1.0 - num19) * (double)(int)pixel3.A + num19 * (double)(int)pixel4.A;
double num26 = (1.0 - num19) * (double)(int)pixel3.R + num19 * (double)(int)pixel4.R;
double num27 = (1.0 - num19) * (double)(int)pixel3.G + num19 * (double)(int)pixel4.G;
double num28 = (1.0 - num19) * (double)(int)pixel3.B + num19 * (double)(int)pixel4.B;
int r = (int)Math.Round((1.0 - num20) * num22 + num20 * num26);
int g = (int)Math.Round((1.0 - num20) * num23 + num20 * num27);
int b = (int)Math.Round((1.0 - num20) * num24 + num20 * num28);
int num29 = (int)Math.Round((1.0 - num20) * num21 + num20 * num25);
bitmapHolder.SetPixel(i * num2 + j, new ColorHolder(num29, r, g, b));
}
}
}
return bitmapHolder;
}
}
}
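The decompiled loop above is an inverse-mapping rotation: each destination pixel is traced back (via polar coordinates) to its source position, then bilinearly interpolated. A minimal Python sketch of the same idea, using nearest-neighbour sampling instead of bilinear for brevity; the function name and sign convention are mine, not from the library:

```python
import math

def rotate_nearest(src, degrees):
    """Rotate a 2D grid about its centre by inverse mapping: for each
    destination cell, sample the source cell that rotates onto it.
    With this sign convention, positive degrees rotate clockwise in
    screen coordinates (y increasing downward)."""
    h, w = len(src), len(src[0])
    rad = math.radians(degrees)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dst = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # apply the inverse rotation to the destination coordinates
            x = (j - cx) * math.cos(-rad) - (i - cy) * math.sin(-rad) + cx
            y = (j - cx) * math.sin(-rad) + (i - cy) * math.cos(-rad) + cy
            si, sj = round(y), round(x)
            if 0 <= si < h and 0 <= sj < w:
                dst[i][j] = src[si][sj]
    return dst
```

The C# version refines this in two ways: it enlarges the destination to the rotated bounding box when `resize` is set, and it blends the four neighbouring source pixels instead of snapping to the nearest one.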
import os
from onering.core import errors
def ensure_dir(target_dir):
if not os.path.isdir(target_dir):
os.makedirs(target_dir)
def open_file_for_writing(folder, fname):
    outfile = os.path.abspath(os.path.join(folder, fname))
    ensure_dir(os.path.dirname(outfile))
    return open(outfile, "w")
class DirPointer(object):
"""
A class to manage a pointer to a current directory and move around.
Normally with the os.path module there is a global "curdir" that is
shared by EVERYTHING - including import_module which means any change
to the global curdir (via os.chdir) also affects loading of modules
dynamically. We rather want to keep track of directory pointers via
multiple instances. Hence this class.
"""
def __init__(self, curdir = "."):
self._dirstack = []
self._current_directory = os.path.abspath(curdir)
@property
def curdir(self):
return self._current_directory
@curdir.setter
def curdir(self, value):
if value.startswith("./") or value.startswith("../"):
newdir = os.path.abspath(os.path.join(self._current_directory, value))
else:
newdir = os.path.abspath(value)
if not os.path.isdir(newdir):
raise errors.NotFoundException("dir", newdir)
else:
self._current_directory = newdir
def pushdir(self):
"""
Pushes the current directory onto the stack so it can be restored later with a popdir.
"""
self._dirstack.append(self._current_directory)
def popdir(self):
self._current_directory = self._dirstack.pop()
return self._current_directory
    def abspath(self, path):
        # os.path.join discards self.curdir when path is already absolute,
        # so this covers both the absolute and relative cases.
        return os.path.abspath(os.path.join(self.curdir, path))
def isfile(self, path):
"""
Tells if the path is a file if it is a relative path.
"""
return os.path.isfile(self.abspath(path))
def isdir(self, path):
"""
Tells if the path is a directory if it is a relative path.
"""
return os.path.isdir(self.abspath(path))
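The pushdir/popdir pattern above can be exercised as follows; since DirPointer itself depends on the onering package, this is a self-contained stand-in (class name is mine) showing the same idea of a per-instance directory pointer:

```python
import os

class DirStack:
    """Illustrative stand-in for DirPointer's pushdir/popdir: keeps its
    own directory pointer instead of mutating the process-wide curdir
    via os.chdir."""
    def __init__(self, curdir="."):
        self._stack = []
        self.curdir = os.path.abspath(curdir)

    def pushdir(self):
        # remember where we are so popdir can restore it later
        self._stack.append(self.curdir)

    def popdir(self):
        self.curdir = self._stack.pop()
        return self.curdir
```

Because each instance carries its own stack, two DirStack objects can wander independently without affecting module imports or each other, which is exactly the motivation stated in DirPointer's docstring.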
Is it forensically cleared? Not intentionally**. Volatile volume content (e.g. swap content) may continue to exist in the unallocated space in the LVM thin pool until overwritten.
The same applies to the content of the snapshot volume for root (that is, changes to root volume since start, such as logging). That too can stick around in the unallocated space in the LVM thin pool until overwritten.
The same, generally, applies to disposable VMs as well (adding the additional data from the private volume to the mix as well).
This is why the Qubes developers are looking at adding ephemeral encryption (via throwaway keys) for volumes and snapshots that are not meant to be kept past the current session.
The solution for the other volume types (root-snapshots, private) is still in design stage.
** assuming a standard hard drive, the (luks encrypted) data simply persists on disk until later overwritten, but could be seen as plain text by a forensic examiner who has access to your password, dom0 luks key, login session or a decrypted image. However, there’s what I call an “opportunistic anti-forensics” feature that might clear the data for you. If you are using an SSD/storage device that supports discard/trim and at every layer [storage hardware (e.g. SSD), luks, lvm, etc.] you and/or qubes developers have configured the system to pass discards down, then the SSD hardware will receive Trim/Discard commands when an LVM volume is removed by the qubes lvm storage driver. On most hardware this will, within a very short amount of time, erase the memory cells containing the data the volume contained. This is not a cryptographic guarantee, but relatively solid.
…Would the same apply for encryption outside of the top-level LUKS encryption, such as mounted VeraCrypt vaults? If they don’t have the VeraCrypt keys, the data that might end up in swap would be encrypted with those VC keys and not viewable with the top-level LUKS keys? Or does unencrypted VC data in RAM potentially end up in swap?
It wasn’t the default to flow it all the way to the hardware on R4.0 a couple years back. IIRC, I had to enable it for LUKS at the time…memory is fuzzy.
However, looking at fstab in a recently installed R4.1, LUKS is configured to pass discards down. LVM is too, by default under both versions. So you should be good, if your hardware supports it.
Notably, issue_discards is still set to 0 in lvm.conf, but that does not apply to thin LVs, only normal LVs. Under a standard install of Qubes, it’s only useful to set it to 1 if you are removing thin pools regularly (I do), as a thin pool is built inside a normal LV object.
Hmm, if you’re mounting volumes directly in dom0, then no, I wouldn’t make a claim that the passwords, keys or plaintext from veracrypt volumes would never end up in dom0 swap. It really depends on how you are using the data in dom0 and how the memory used to store that data is flagged in the kernel.
If you were mounting the containers in domU VMs then it would be much less likely to end up in dom0 swap, since VM memory is Xen memory and can’t be swapped by dom0 kernel unless the VM sends it to dom0 (e.g. via some channel or introspection, the most critical is likely to be the display of a domU window in dom0).
PROXOMITRON 4.5 -- May vs. June
Operational differences between the two Naoko 4.5 releases
The Remote Proxy "Direct Connection Fallback" Feature
The May version contains an undocumented new feature related to remote proxy connections: In the event of a connection failure with a remote proxy, Proxomitron will "fall back" to a direct connection with the requested URL. (A warning is initiated when the connection to the remote proxy has failed, but there is no warning that Proxomitron is about to establish a direct connection with the site.) Due to a bug, however, this behavior is not exhibited if Proxomitron is set to rotate through multiple proxies: Should connection failure occur with one of the proxies during rotation, Proxomitron will then attempt connection with the next proxy in the list. In the event that all listed proxies suffer connection failure, Proxomitron will become caught in an "infinite loop" as it continues to attempt connection with each proxy in rotation, instead of falling back to a direct connection.
The June version of 4.5 does not contain the feature to "fall back" to a direct connection, nor does it contain the "infinite loop" proxy-rotation bug.
The Merge Bug
The May release also contains a notable Merge bug: After merging an input file, the current config name displayed by Proxomitron in its title bar is changed to the name of the input file -- afterwards, when the user saves their config using the "green disk" icon, the input file is overwritten instead of the user's config file. When Proxomitron is restarted, naturally the user is perplexed to find that merged filters and subsequent edits were apparently not saved. The user can correct the problem during the bug-occurring session by using the File dropdown menu's "Save Config File" and resaving to their current config's filename (which will also restore the display of the config's filename in the program's title bar) -- or, if Proxomitron was already closed and restarted, the user can Load (not Merge!) the overwritten input file and then use the File dropdown menu to save it to their usual config's filename. To avoid problems with the Merge bug, the user should just use the dropdown menu to save the config after each Merge, not the green disk icon.
The June version of 4.5 does not contain the Merge bug.
The May version's feature to "fall back" to a direct connection should provide the user of a remote proxy (such as a caching proxy) with a direct-accessing connection in the event of proxy failure; however, this feature is impractical for use with a remote "anonymizing" proxy since the main purpose for using one is to avoid revealing the user's true IP. The June version was primarily released to resolve this issue with anonymizing proxies, which is why the fallback feature is removed from it -- but it has also been corrected for bugs found in the May version. The June version is therefore the "final" 4.5 release, while the May version remains available for those who specifically need the fallback behavior when using a single remote proxy. The May release of Naoko 4.5 is the only version of Proxomitron having this "direct connection fallback" feature.
The accuracy of this overview of operational differences has been confirmed by Scott Lemmon, who graciously reviewed it upon request.
This post has been republished via RSS; it originally appeared at: Microsoft Tech Community - Latest Blogs.
Azure Sentinel analytics rules help security teams discover threats and anomalous behaviors to ensure full security coverage for your environment.
After connecting our data sources to Azure Sentinel, the first step is to enable analytics rules. Each data source comes with built-in, out-of-the-box templates for creating threat detection rules.
Analytics rules search for specific events or sets of events across your environment, alert you when certain event thresholds or conditions are reached, generate incidents for SOC to triage and investigate, and respond to threats with automated tracking and remediation processes.
Scenario: A scheduled rule failed to execute, or appears with AUTO DISABLED added to the name
It's a rare occurrence that a scheduled query rule fails to run, but it can happen. As shown in the image below, a customer had found several scheduled analytics rules that had been auto-disabled in their environment.
Azure Sentinel classifies failures up front as either transient or permanent, based on the specific type of the failure and the circumstances that led to it.
A transient failure occurs due to a circumstance which is temporary and will soon return to normal, at which point the rule execution will succeed. Some examples of failures that Azure Sentinel classifies as transient are:
- A rule query takes too long to run and times out.
- Connectivity issues between data sources and Log Analytics, or between Log Analytics and Azure Sentinel.
- Any other new and unknown failure is considered transient.
In the event of a transient failure, Azure Sentinel continues trying to execute the rule again after predetermined and ever-increasing intervals, up to a point. After that, the rule will run again only at its next scheduled time. A rule will never be auto-disabled due to a transient failure.
A permanent failure occurs due to a change in the conditions that allow the rule to run, which without human intervention will not return to their former status. The following are some examples of failures that are classified as permanent:
- The target workspace (on which the rule query operated) has been deleted.
- The target table (on which the rule query operated) has been deleted.
- Azure Sentinel had been removed from the target workspace.
- A function used by the rule query is no longer valid; it has been either modified or removed.
- Permissions to one of the data sources of the rule query were changed.
- One of the data sources of the rule query was deleted or disconnected.
In the event of a predetermined number of consecutive permanent failures, of the same type and on the same rule, Azure Sentinel stops trying to execute the rule, and takes the following steps:
- Disables the rule.
- Adds the words "AUTO DISABLED" to the beginning of the rule's name.
- Adds the reason for the failure (and the disabling) to the rule's description.
It's a rare occurrence that a scheduled query rule gets auto-disabled, but it can happen. When it does, the SOC faces the following challenges in triaging, investigating, and responding to threats with automated tracking and remediation processes:
- Alerts/Incidents will not be generated.
- Automated threat responses (Automation Rules/Playbooks) for your rules will not be triggered.
As of today, SOC managers and analysts must manually check the rule list on a regular basis for the presence of auto-disabled rules; there is no easy way to detect them automatically.
There has been a need for a solution that notifies SOC managers and analysts when a scheduled analytics rule has been auto-disabled. This blog details how to monitor Azure Sentinel analytics rules periodically and immediately notify the SOC team via email or a Teams post if any rule gets auto-disabled, using this Playbook.
This section explains how to use the ARM template to deploy the playbook to get notifications when an Azure Sentinel Analytic rule gets auto-disabled.
To access the ARM template, navigate to this Playbook.
- Click the Deploy to Azure/Deploy to Azure Gov Button:
- Enter values for the following parameters.
- "Azure Sentinel Workspace Name": Azure Log Analytics Workspace Name
- "Azure Sentinel Workspace Resource Group": Azure Sentinel Workspace Resource Group Name
- "Mailing List": Email Ids separated by semi colon (;)
- "Teams Id": Microsoft Teams Id
- "Channel Id": Microsoft Teams Channel Id
- Click "Review & Create"; after successful validation, click "Create".
This section explains trigger and actions inside the workflow:
- Recurrence trigger - The Logic App is activated by a Recurrence trigger whose frequency of execution can be adjusted to your requirements.
- HTTP GET – The Logic App calls the Azure Sentinel analytic rules REST API endpoint to get all the rules.
- For_Each – Loops through all the analytic rules and determines whether any rule has its enabled property set to "false" and "AUTO DISABLED" in its display name.
- Send Email – Sends an email to the mail recipients provided by the user.
- Post Message – Posts a message to the Teams channel provided by the user.
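The filter in the For_Each step boils down to a simple predicate on each rule object. A minimal sketch in Python; the helper name and sample payload are mine, shaped like the `properties` object the alert-rules REST API returns:

```python
def is_auto_disabled(rule: dict) -> bool:
    """True if Azure Sentinel auto-disabled this scheduled rule."""
    props = rule.get("properties", {})
    return (
        props.get("enabled") is False
        and "AUTO DISABLED" in props.get("displayName", "")
    )

# Sample payload shaped like the REST API response.
rules = [
    {"properties": {"displayName": "AUTO DISABLED Suspicious sign-ins", "enabled": False}},
    {"properties": {"displayName": "Brute-force detection", "enabled": True}},
]

flagged = [r for r in rules if is_auto_disabled(r)]
print(len(flagged))  # 1
```

Any rule that passes the predicate feeds the Send Email and Post Message actions.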
This section explains steps to perform after successful deployment:
1. Authorize API Connections - used to connect Logic Apps to SaaS services, such as Office 365 & Teams
2. This playbook uses Managed Identity which grants permissions by using Azure role-based access control (Azure RBAC). The managed identity is authenticated with Azure AD, so you don’t have to store any credentials in code
With this Playbook, security teams can discover the presence of any auto-disabled rules round-the-clock, with near real-time visibility via email/Teams notifications. This is handy for monitoring the health of Azure Sentinel analytic rules and avoiding any interruptions in discovering threats and anomalous behaviors, and in remediation processes, in your environment from your connected data sources/logs. Try it out, and let us know what you think!
|
OPCFW_CODE
|
April 17, 2009
School of Engineering, Coimbatore
Engineering students at Amrita’s three campuses regularly participate in inter-college fests and bring home honors and awards. In this feature, we profile the distinctions obtained by students of 2nd year ECE (Electronics and Communication Engineering) at the Coimbatore Campus.
During the second week of February, Vignesh K. and Ganapathy Raman K. participated in a technical symposium at the Coimbatore Institute of Technology. The team competed against over 50 teams to win first place in an event named Yours Digitally. Based on signal processing, the event included quiz rounds with questions on network functions, convolutions, structure of filters, and functions and graphs. “For example, we were asked to state whether the DFT is N/2 times the Fourier series or not,” shared Vignesh. “We were able to give the correct answer, which many didn’t know,” added team-mate Ganapathy. “The DFT is actually N times the Fourier series.”
One week later, Karthik D. and Sai Pramod U., also 2nd year ECE students, participated in a national level technical symposium conducted by the IEEE Student Branch of PSG Institute of Technology. The team beat nearly 70 teams to emerge as the overall winner in Circuit Design. “We were asked questions on DAC, astable multivibrator, butterworth filters, etc. — some of these topics were recently covered in the college laboratory,” said Karthik. “In the final round we were asked to design circuits,” added Sai Pramod. “I guess it is the strong grounding we have received in analog and digital electronics that helped us win.”
In the first week of March, several Amrita students participated in a fest, billed as an international eco-friendly event, also at the PSG Institute of Technology. Balasubramanian R. and Vignesh K. won the second prize in a programming contest based on MATLAB, beating nearly 150 other teams. R. R. Shrihari and Rahul R. M. won the third prize in a puzzles contest. Among other things, they were asked to find the anagrams of “Eleven plus Two” and “Statue of Liberty.” The solutions, which the team was able to provide, were “twelve plus one” and “built to stay free” respectively. The one-hour contest had nearly 450 contestants.
Ganapathy Raman K. won the second prize in Mathemagic. “Nearly 100 students participated in the prelims,” he stated. “This was a written test with questions from algebra, trigonometry, geometry, probability, calculus etc. I was one of eight students selected for the finals.” In the finals, Ganapathy was able to solve 3 of the 5 questions the judges posed to him. For example, Ganapathy was asked to mathematically interpret the graph shown. “The graph starts from infinity and then alternates between 1 and 2 before converging to 1.618. It gives the ratio of two consecutive terms in the Fibonacci series (0, 1, 1, 2, 3, 5, 8, 13, 21…), i.e. 1/0, 1/1, 2/1, 3/2, …” was his confident and correct answer. We congratulate all the participants and the winners.
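The limit Ganapathy described is the golden ratio φ ≈ 1.618. A short Python sketch reproduces the oscillating convergence of consecutive Fibonacci ratios:

```python
def fib_ratios(n):
    """First n ratios of consecutive Fibonacci terms: 2/1, 3/2, 5/3, ..."""
    a, b = 1, 1
    ratios = []
    for _ in range(n):
        a, b = b, a + b
        ratios.append(b / a)
    return ratios

# The ratios alternate above and below the limit before settling near phi.
print(round(fib_ratios(10)[-1], 3))  # 1.618
```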
|
OPCFW_CODE
|
In our recent interview with Oddworld Inhabitants' Lorne Lanning, apart from discussing the problems that capitalism and growth models bring to game development, we also chatted for a time about the early days of the first Xbox. Lanning was convinced to switch from PlayStation to Xbox, bringing Munch's Oddysee to the platform's launch, which was a major coup for the upstart Xbox team within Microsoft. As it turned out, one of the reasons that Oddworld jumped ship to Xbox was the chance that the system could have been given away.
"At the time, Xbox thought that the core market was going to be casual. They were going to be the casual gamers' machine. Now, that's why they approached us because they said 'we think you've got something that competes in that Mario space and we think Mario's the thing to kill ... We see that space. We want that audience. We love Oddworld so why don't you get on this bandwagon? And we might give the box away'," Lanning explained to GamesIndustry.biz.
"So now you're like, 'look, if you're going to give the box away, you're going to win. If you're going to win, we want to be on board'."
Obviously, things changed before Microsoft went to market with its first games console, but the conversations internally were all over the place, according to Xbox co-creator Seamus Blackley, who helped draft the Xbox proposal and assemble the design team.
"In the early days of Xbox, especially before we had figured out how to get greenlit for the project as a pure game console, everybody and their brother who saw the new project starting tried to come in and say it should be free, say it should be forced to run Windows after some period of time," Blackley told us.
The idea there essentially would have been to use Xbox as a trojan horse for Windows. It's probably smart that they didn't attempt that, however. As Lanning observed to us, the entertainment industry didn't share any love for the operating system.
"You got the brand that everyone resents having to buy, how's that going to work in the entertainment industry? See, we don't need your OS in the entertainment industry. We don't need shit from you in the entertainment industry. In fact, if anything you do runs like fucking Windows, we don't want anything to do with it, right? That was a very common perception," Lanning said. "There was a lot of resistance; it was, 'Microsoft Game Studios? Fuck Microsoft!' And we went around the world defending them. We said, 'Look, this is about building better environments for developers so that you can get better games at cheaper prices and developers can stay in business longer'."
Blackley noted that a number of other ideas were pushed around at Microsoft too. Some people said Xbox should be focused on playing movies, or that all the games would have to be made by Microsoft. Some even pushed the notion that Microsoft should make a huge play and just gobble up Nintendo. "Just name it, name a bad idea and it was something we had to deal with," Blackley lamented.
In the end, those bad ideas were swept under the rug, and the Xbox team wound up producing an excellent platform that ushered in a new era of first-person shooters on console (beginning with Halo) and laid the foundation for a revolutionary online service in Xbox Live. As Lanning put it, while Microsoft was a corporate behemoth, the Xbox team was out to prove itself within those walls.
"These guys, Kevin Bachus, Seamus Blackley, Ed Fries, were in there fighting for it. So then they sold [the idea], and then selling it, they had to deliver it. So they had this, 'We need to do this!' [attitude]. They were on the line the way a small developer is. They were hungry," he remarked. "It wasn't like they had a job and they got assigned to make this work. They did it more like it was a venture and they had to prove it and they had to sell it and they had to own it and deliver on it. When you combined all those factors, you had a really interesting moment. And it was a great machine."
|
OPCFW_CODE
|
Flexget 3.1.51 RSS Error with some symbol
Expected behaviour:
Actual behaviour:
Steps to reproduce:
Step 1: ...
Config:
--- config from task: Tapochek
exec:
allow_background: true
auto_escape: true
on_exit:
for_accepted:
- echo " `date +'%Y-%m-%d %H:%M:%S'`#{{task}}#{{imdb_name}}#{{imdb_year}}#{{quality}}"
>> '/config/accepted.log'
for_entries:
- echo "{{original_title}} | {{title}}" >> '/config/entry.txt'
if:
- '''[HDrezka Studio]'' in original_title': reject
- '''[Line]'' in original_title': reject
- '''[Flarrow Films]'' in original_title': reject
imdb:
min_score: 6.5
min_votes: 5000
imdb_lookup: true
list_match:
action: accept
from:
- trakt_list:
account: _rik_
list: watchlist
strip_dates: false
type: movies
remove_on_match: false
single_match: true
manipulate:
- title:
replace:
format: ' H265'
regexp: -HEVC
- title:
replace:
format: ''
regexp: "(60 fps|iTunes|IMAX Edition|HDRezka Studio|\u0421\u0412 \u0421\u0442\
\u0443\u0434\u0438\u044F|Open Matte|\u041B\u0438\u0446\u0435\u043D\u0437\
\u0438\u044F|IMAX|AVO| \u0433\\.)"
- title:
replace:
format: ''
regexp: '\[\] '
- title:
replace:
format: \2 (\3) [\4, \5] [\7]
regexp: (.*)/(.*) \((.*)\) \[(.*), (.*)\](.*)\[(.*)\]
- title:
replace:
format: \1 (\3) - \5
regexp: "(.*) \\((.*)\\) \\[(.[0-9]*)(.[\u0410-\u042F\u0430-\u044F\\,\\ ]*)(.[a-zA-Z0-9\\\
-\\ ]*)\\] \\[(.*)\\]"
pathscrub: windows
quality: 1080p
rss: http://tapochek.net/rss2.php?f=122,324,431&uk=**********&h=168
template: tapochek_films
trakt_lookup:
account: _rik_
transmission:
host: torr
path: /downloads/plex/Films/{{ movie_name }} ({{ movie_year }})
upgrade:
target: h265
tracking: true
Log:
(click to expand)
2020-04-21 20:12:03 VERBOSE task_queue There are 1 tasks to execute. Shutdown will commence when they have completed.
2020-04-21 20:12:04 VERBOSE rss Tapochek Bozo error <class 'xml.sax._exceptions.SAXParseException'> while parsing feed, but entries were produced, ignoring the error.
2020-04-21 20:12:04 WARNING rss Tapochek Skipped %s RSS-entries without required information (title, link or enclosures)
2020-04-21 20:12:04 WARNING details Tapochek Task didn't produce any entries. This is likely due to a mis-configured or non-functional input.
2020-04-21 20:12:04 VERBOSE manipulate Tapochek Modified 0 entries.
2020-04-21 20:12:04 VERBOSE details Tapochek Summary - Accepted: 0 (Rejected: 0 Undecided: 0 Failed: 0)
Additional information:
FlexGet version: 3.1.51
Python version: 3.7.5
Installation method: docker image
Using daemon (yes/no): yes
OS and version: alpine:3.10
Link to crash log:
è - this symbol
Your RSS feed is likely not encoded properly and there is not much we can do about it :(
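For context: the bozo error is raised by the underlying XML parser when the feed contains bytes that are invalid for its declared encoding. A minimal reproduction with Python's stdlib SAX parser (feedparser surfaces this same exception class); the feed snippet is made up, declaring UTF-8 while containing the Latin-1 byte 0xE8 for "è":

```python
import xml.sax

# Declares UTF-8 but embeds the Latin-1 byte for "è" (0xE8) - invalid UTF-8 here.
bad_feed = (
    b'<?xml version="1.0" encoding="utf-8"?>'
    b'<rss><channel><title>Caf\xe8</title></channel></rss>'
)

try:
    xml.sax.parseString(bad_feed, xml.sax.ContentHandler())
except xml.sax.SAXParseException as exc:
    print("Bozo-style parse failure:", exc.getMessage())
```

The only real fix is on the feed side: the tracker has to emit bytes that match the encoding it declares.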
Thanks for the answer, I’ll talk with the admins of the torrent tracker.
|
GITHUB_ARCHIVE
|
Range restriction, climate variability, and human-related risks imperil lizards worldwide
Chen, Chuanwu et al. (2023), Range restriction, climate variability, and human-related risks imperil lizards worldwide, Dryad, Dataset, https://doi.org/10.5061/dryad.rjdfn2zgn
The intrinsic predictors were collected from Meiri (2018), Skeels et al. (2020), and Caetano et al. (2022). The extrinsic factors were calculated by mapping the environmental rasters to the species distribution grids based on the shapefiles of the Global Assessment of Reptile Distributions (GARD; Roll & Meiri, 2022).
1. Meiri, S. (2018). Traits of lizards of the world: Variation around a successful evolutionary design. Global ecology and biogeography, 27, 1168–1172.
2. Skeels, A., Esquerré, D., & Cardillo, M. (2020). Alternative pathways to diversity across ecologically distinct lizard radiations. Global Ecology and Biogeography, 29, 454–469.
3. Roll, U., & Meiri, S. (2022). Data from: GARD 1.7—updated global distributions for all terrestrial reptiles. Dryad Digital Repository.
4. Caetano, G. H. D. O., Chapple, D. G., Grenyer, R., Raz, T., Rosenblatt, J., Tingley, R., ... & Roll, U. (2022). Automated assessment reveals that the extinction risk of reptiles is widely underestimated across space and phylogeny. PLoS Biology, 20, e3001544.
We used R and the packages listed below to carry out the analyses. The phylogenetic linear regression model was performed using the package phylolm (https://github.com/lamho86/phylolm). The model averaging analyses were performed by using the package MuMIn (https://cran.r-project.org/package=MuMIn).
Innovation and Entrepreneurship Program of Jiangsu Province, Award: JSSCBS20210302
Priority Academic Program Development of Jiangsu Higher Education Institutions
National Natural Science Foundation of China, Award: 32001226
National Natural Science Foundation of China, Award: 31971545
National Natural Science Foundation of China, Award: 32271734
|
OPCFW_CODE
|
// C++.
#include <cassert>
// Qt.
#include <QAction>
#include <QApplication>
// Local
#include <RadeonGPUAnalyzerGUI/include/qt/rgViewManager.h>
#include <RadeonGPUAnalyzerGUI/include/rgDefinitions.h>
#include <RadeonGPUAnalyzerGUI/include/rgUtils.h>
rgViewManager::rgViewManager(QWidget* pParent) :
m_pParent(pParent),
m_focusViewIndex(-1)
{
CreateActions();
// Handler for when the focus object changes.
bool isConnected = connect(qApp, &QGuiApplication::focusObjectChanged, this, &rgViewManager::HandleFocusObjectChanged);
assert(isConnected);
// Handler for when the application is about to quit.
isConnected = connect(qApp, &QCoreApplication::aboutToQuit, this, &rgViewManager::HandleApplicationAboutToQuit);
assert(isConnected);
}
rgViewManager::~rgViewManager()
{
}
void rgViewManager::CreateActions()
{
// Focus next view action.
m_pFocusNextViewAction = new QAction(this);
m_pFocusNextViewAction->setShortcutContext(Qt::ApplicationShortcut);
m_pFocusNextViewAction->setShortcut(QKeySequence(gs_ACTION_HOTKEY_NEXT_VIEW));
m_pParent->addAction(m_pFocusNextViewAction);
bool isConnected = connect(m_pFocusNextViewAction, &QAction::triggered, this, &rgViewManager::HandleFocusNextViewAction);
assert(isConnected);
// Focus previous view action.
m_pFocusPrevViewAction = new QAction(this);
m_pFocusPrevViewAction->setShortcutContext(Qt::ApplicationShortcut);
m_pFocusPrevViewAction->setShortcut(QKeySequence(gs_ACTION_HOTKEY_PREVIOUS_VIEW));
m_pParent->addAction(m_pFocusPrevViewAction);
isConnected = connect(m_pFocusPrevViewAction, &QAction::triggered, this, &rgViewManager::HandleFocusPrevViewAction);
assert(isConnected);
}
void rgViewManager::AddView(rgViewContainer* pViewContainer, bool isActive)
{
if (isActive)
{
m_viewContainers.push_back(pViewContainer);
}
else
{
m_inactiveViewContainers.push_back(pViewContainer);
}
}
void rgViewManager::FocusNextView()
{
// Increment focus index.
int newFocusIndex = m_focusViewIndex + 1;
if (newFocusIndex >= static_cast<int>(m_viewContainers.size()))
{
newFocusIndex = 0;
}
// Change the view focus index.
SetFocusedViewIndex(newFocusIndex);
// Apply the focus change.
ApplyViewFocus();
}
void rgViewManager::FocusPrevView()
{
// Decrement focus index.
int newFocusIndex = m_focusViewIndex - 1;
if (newFocusIndex < 0)
{
newFocusIndex = static_cast<int>(m_viewContainers.size()) - 1;
}
// Change the view focus index.
SetFocusedViewIndex(newFocusIndex);
// Apply the focus change.
ApplyViewFocus();
}
void rgViewManager::SetFocusedView(rgViewContainer* pViewContainer)
{
if (m_pFocusViewContainer != nullptr)
{
// Get old focused view container.
rgViewContainer* pOldFocusContainer = m_pFocusViewContainer;
// Set focus state.
pOldFocusContainer->SetFocusedState(false);
}
// Set new focused view container.
m_pFocusViewContainer = pViewContainer;
// Set focus state.
m_pFocusViewContainer->SetFocusedState(true);
}
void rgViewManager::SetFocusedViewIndex(int index)
{
if (index >= 0 && index < static_cast<int>(m_viewContainers.size()))
{
// Set focus index.
m_focusViewIndex = index;
// Get container at the focus index.
rgViewContainer* newViewContainer = m_viewContainers[m_focusViewIndex];
// Set the focused view.
SetFocusedView(newViewContainer);
}
}
void rgViewManager::ClearFocusedView()
{
// Clear focused state of currently focused view container.
if (m_pFocusViewContainer != nullptr)
{
m_pFocusViewContainer->SetFocusedState(false);
}
// Invalidate focus index.
m_focusViewIndex = -1;
m_pFocusViewContainer = nullptr;
}
void rgViewManager::ApplyViewFocus()
{
// Focus in on the widget.
if (m_pFocusViewContainer != nullptr)
{
// Get widget to focus on
QWidget* pFocusWidget = m_pFocusViewContainer->GetMainWidget();
if (pFocusWidget != nullptr)
{
pFocusWidget->setFocus();
}
}
}
void rgViewManager::HandleApplicationAboutToQuit()
{
// Remove all references to existing containers so widgets aren't re-polished during shutdown.
m_viewContainers.clear();
m_inactiveViewContainers.clear();
}
void rgViewManager::HandleFocusNextViewAction()
{
FocusNextView();
}
void rgViewManager::HandleFocusPrevViewAction()
{
FocusPrevView();
}
int FindAncestorContainerIndex(const QWidget* pWidget, const std::vector<rgViewContainer*>& containerList)
{
int ret = -1;
// Search the list to find a container that is an ancestor of the given widget.
for (int i = 0; i < static_cast<int>(containerList.size()); i++)
{
rgViewContainer* pViewContainer = containerList[i];
// If the focus widget is a child of a view container, give that container view focus.
if (pViewContainer->isAncestorOf(pWidget))
{
ret = i;
break;
}
}
return ret;
}
void rgViewManager::HandleFocusObjectChanged(QObject* pObject)
{
QWidget* pWidget = qobject_cast<QWidget*>(pObject);
if (pWidget != nullptr)
{
// Find appropriate view container to switch view focus to.
int focusIndex = FindAncestorContainerIndex(pWidget, m_viewContainers);
if (focusIndex >= 0)
{
SetFocusedViewIndex(focusIndex);
}
else
{
// If no container in the main list exists, check the inactive container list.
int inactiveIndex = FindAncestorContainerIndex(pWidget, m_inactiveViewContainers);
if (inactiveIndex >= 0)
{
// Get the view from the inactive view list.
rgViewContainer* pView = m_inactiveViewContainers[inactiveIndex];
// Set the focused view.
SetFocusedView(pView);
}
else
{
ClearFocusedView();
}
}
}
else
{
ClearFocusedView();
}
}
|
STACK_EDU
|
MandrakeSoft and Pearson Education sent us over a copy of their latest “pro” edition of the popular Linux Mandrake 9.0. We already wrote a review of Linux Mandrake 9.0, so this is going to be a review of the ProSuite deal specifically and what you get for $199 USD RRP (easily found for around $175 USD in the market). Update: Apparently, StarOffice 6.0 is included in its full version with the distribution. Too bad Mandrake does such a poor job and includes its RPMs along with some hundreds of other demos on the two Commercial CDs, without saying a word about it (or where to find it) in the “Commercial Software Guide” booklet or another really prominent place.

The all-blue box includes 8 CDs: 2 with the Mandrake Linux 9 OS, 1 CD with the internationalization and documentation, 2 CDs with commercial applications (demos), 1 supplementary CD with more open source applications, 1 CD with the sources and 1 CD with the IBM DB2 database, an evaluation version that works well on Mandrake Linux. Also, you will find a DVD which includes everything, for those with DVD drives (I haven’t tried the DVD, as the machine I installed Mandrake ProSuite on doesn’t have a DVD drive).
Additionally, you will find a 20-page booklet/pocket guide which is the “Quick Startup Guide”. This short booklet answers some very basic questions about Mandrake, its installation and uninstallation, hardware support and in general, it is the “first step” towards installing Mandrake Linux.
Then you will find the main manual, named “Installation and User Guide”; it is a 190-page illustrated book. The guide describes the installation in more detail, then the KDE environment (e.g. KMenu, browser, email, printing), and then it goes into detail about how to use the Mandrake Tools. The Mandrake Tools description takes up about half of the book. At the end of the book you will find a pretty extensive troubleshooting guide and then the Index. Unfortunately, the Index, while alphabetically sorted, does not have letter headers to easily identify where a letter starts or stops. Moreover, the Index is not as populated as it should have been.
There is also a 32-page booklet which describes the commercial software demos to be found on the two accompanying CDs. There are about 60 commercial demos included on these two CDs (including, for example, Opera, Win4Lin, TheKompany demos, jBase, Turboprint, AC3D, Stuffit and more). I don’t see the point of having that booklet there other than for promotional purposes; the great majority of these are demos freely found on the web.
As for the additional x86 application CD, it includes RPMs most easily found in the /contrib folder on the Mandrake FTP or on rpmfind: Abiword, loads of Perl scripts, Afterstep, Apache2…
The ProSuite Edition is effectively the Download Edition but with the Server installation CDs, plus a few more CDs with freely available software, a manual, and 90 days of installation web support. It includes phone support for 5 incidents, valid for 60 days, but again, only for the installation part.
And this is exactly where my problem lies. It is just not enough to only support the installation. I mean, it is not 1995 anymore and the Linux installation methods are not as arcane as they used to be. I wrote the exact same thing in our review of the Red Hat 8.0 Professional boxset a month ago. Throwing a zillion packages on some CDs (and some are untested or don’t work properly) and only supporting the installation just doesn’t cut it. I want more support for an OS that costs two hundred dollars. Driver support, for example, and even application support, even if MandrakeSoft or Red Hat or SuSE are not the direct developers of these applications. From the moment they include all that software in their product and sell it, they should be supporting it. Apple, Microsoft and other Unix vendors support all or most of the included software. And the main reason they do not include other third-party software is that they don’t want the headaches of supporting it for people who do not understand the meaning of “we are not the developers of this application”. Linux vendors have taken the approach of supporting only the installation procedure (possibly a relic of the ’90s taboo that “installing Linux is difficult”), and they are throwing as much (free) software as they can into the box and expecting us to pay for it. Sure, if you are a modem user it makes sense to have the CDs with all the software on them, but you won’t really be using all of these packages anyway.
This is the “server” product from Mandrake and includes a kernel with support for 4 GB of memory and other server services like DNS, NIS etc. Red Hat Pro includes these packages too, and they include even more in their (more pricey) Advanced Server product (with cluster support and other exotic features).
The only “interesting” reason why someone should buy the ProSuite edition is the MandrakeOnline security update feature, which is free for the first computer and 25% off the price for a second installation (normally $55 per year). But still, putting everything together, the ProSuite is 30-40% more expensive than its direct competitors, Red Hat 8 Professional ($125-$150) and SuSE Professional (only $80). And in fact, the documentation in Red Hat 8 PRO is much better than Mandrake’s; it includes more booklets explaining things beyond how to click through the preference panels or how to load a web browser… ProSuite includes some more documentation on the CD as PDFs (as Red Hat and SuSE also do), but I can’t get it out of my mind that a real booklet would be great (I prefer something that I can touch for something I would pay for). In fact, the same PDFs are included in the Standard and PowerPak editions, so they do not give an edge to the ProSuite at all.
My conclusion would be (as with Red Hat as well) to only buy the Standard Edition if you want to support Mandrake. The only “Pro” product that makes a good purchase deal among all three main Linux distros is the SuSE one. Unless I missed something, I personally do not see the Mandrake ProSuite 9.0 as a good deal for what it gives you for two hundred bucks (when comparing it to other Mandrake products and the competition). You expect from any review to give you an idea if it is worth buying or not, so here is my conclusion: ProSuite 9.0 is expensive for what it offers. Update: I just found the StarOffice RPMs. They were “lost” in the chaos of the commercial CDs without a single word from the manuals or the OS that such a prominent piece of software is included in its full version. Well, under the new findings, the price seems much better now (StarOffice 6 itself sells for $75). The Acronis OS Selector boot manager is also included in its full version (as much as I can tell after installing it). Thing is… do you need StarOffice when OpenOffice.org can do most of the job well?
|
OPCFW_CODE
|
Lua programming language is widely used in game development. It is a lightweight and efficient scripting language that offers excellent performance, flexibility, and ease of use. In this guide, we will explore the basics of Lua programming and its applications in game development.
What is Lua Programming Language?
Lua is a scripting language that was developed in Brazil in 1993 by Roberto Ierusalimschy, Luiz Henrique de Figueiredo, and Waldemar Celes. It is an open-source language that is easy to learn and use. Lua is a high-level language that is designed to be embedded into applications and is known for its simplicity, flexibility, and speed.
Applications of Lua Programming in Game Development
Lua programming language has gained popularity among game developers due to its ease of use and efficient performance. It is used in many popular games, including World of Warcraft, Angry Birds, and Civilization V. Lua is used to create game mechanics, user interfaces, and to script artificial intelligence in games.
How to Get Started with Lua Programming?
To get started with Lua programming, you need to install Lua on your computer. Lua is available for Windows, Linux, and Mac operating systems. You can download the latest version of Lua from the official website, lua.org.
Once you have installed Lua, you can use any text editor to write Lua code. Notepad, Sublime Text, and Atom are popular text editors that support Lua syntax highlighting. Alternatively, you can use an integrated development environment (IDE) such as ZeroBrane Studio or Eclipse with the Lua plugin.
Basic Syntax of Lua Programming Language
Lua programming language is simple and easy to learn. It has a concise syntax that makes it easy to write and understand code. Here is an example of Lua code:
-- This is a comment in Lua
-- Variables can be declared like this:
local x = 10
local y = 20
-- You can print values using print()
print("The value of x is ", x)
print("The value of y is ", y)
-- You can do arithmetic operations like this:
local z = x + y
print("The value of z is ", z)
In the example code above, we declared two variables x and y and assigned them the values 10 and 20, respectively. We then printed the values of x and y using the print() function. Finally, we added x and y and assigned the result to the variable z, which we also printed.
Lua Programming Language Features
Lua programming language has many features that make it ideal for game development. Some of these features include:
Lightweight and Fast:
Lua is a lightweight language that is easy to learn and use. It is designed to be fast and efficient, making it ideal for game development.
Easy to Embed:
Lua is easy to embed into other applications, making it perfect for game development. It can be integrated with C/C++ and other programming languages.
Object-Oriented Programming:
Lua supports object-oriented programming, which makes it easy to create complex game mechanics and systems.
Garbage Collection:
Lua has built-in garbage collection, which makes it easy to manage memory in games.
Cross-Platform:
Lua is a cross-platform language that can run on Windows, Linux, and Mac operating systems.
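The object-oriented support mentioned above is conventionally built from tables and metatables. Here is a minimal sketch; the Enemy "class" and its fields are invented for illustration:

```lua
-- A minimal "class" using Lua tables and metatables.
local Enemy = {}
Enemy.__index = Enemy

function Enemy.new(name, hp)
  local self = setmetatable({}, Enemy)
  self.name = name
  self.hp = hp
  return self
end

-- Method syntax (colon) passes the object as the implicit "self".
function Enemy:takeDamage(amount)
  self.hp = self.hp - amount
  return self.hp
end

local goblin = Enemy.new("Goblin", 30)
print(goblin:takeDamage(10))  -- prints 20
```

Setting `Enemy.__index = Enemy` makes instances fall back to the `Enemy` table for method lookups, which is the standard way to emulate classes in Lua.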
Lua programming language is a powerful and efficient scripting language that is ideal for game development. It is lightweight, easy to use, and supports object-oriented programming. Lua is used in many popular games and is an essential tool for game developers. With a knowledge of the basics of the Lua programming language, you can create exciting and engaging games.
Read Also- Programming Languages
|
OPCFW_CODE
|
Can I hire someone for Java coding help with Android App Security Threat Intelligence Incident Response Threat Intelligence Case Studies? We have all been calling Java developers who have security issues, or DDoS attacks they are having come to a close and we all have been having that happen again when we decided to get phishing threats to go away and there’s the case: a. Google incident that included a report about the incident and not a report about the incident itself. The Google Incident made similar allegations more than two blog here ago. The incidents focused on the ability of Google to collect metadata about users, such as their search index. The attack is still ongoing after the recent reporting of a new incident on BitTorrent. b. The first mention that this Google incident got back in May. This incident didn’t include a link to one of these reports. Perhaps it was a connection to the investigation by the Firebase team. Did you get any links to the initial notes released by Google? Here is what that report says about the incident. What’s happening with the following Google images? Image of image (jpg) Used for the Google attack report (0.4.6 format) Image from Google Image Tiles showing the Google incident files. I haven’t used these images. It’s hard to explain exactly how the Google incident is related to the impact it has on the user data in the system, but it’s a plausible explanation. Maybe the incident occurred as a result of an email exchange recently, so it could have been a virus alert. The firebase teams are on the lookout to report new infringements from Google. If the firebase team is working with official source I find it very unlikely that the alleged Google incident will ever lead to more violations at Google, or to the use of the same types of information in any of their apps. There’s also a site called Incident Analysis, in which Google analytics on your Google Drive system are shown. 
This website even has a video overCan I hire someone for Java coding help with Android App Security Threat Intelligence Incident Response Threat Intelligence Case Studies? You just need to be sure to qualify your time.
Do My Online Math Course
You can also follow this post to learn more about security threats Intelligence Threat Intelligence Incident Response Operations and how to read this issue. I know that this post is not so much the Java security issue which is the learn this here now security issue, but the security threat. There is a great article, which talks about smartly deployed software-based applications. Security threats, in spite of the fact that Java security issue is the main attack on the security of the first mile. There is a time when security cannot be attacked. You don’t want to be successful in a smart enterprise with many switches and no place more to go. Now, this issue has been claimed as a security threat. Once again, it is becoming clear that security cannot be attacked. Both good as Java security issue and the first mile security threat. However there is a time, is it the java security issue which is a security threat. Here is the article,which you already read here. “Smartly deployed software-based applications that can be easily and securely deployed into a room, or even directly to a computer that is no longer required for any other purpose than to engage in business activities”. There is a time when security can be attacked. You don’t want to be successful in a smart enterprise with many switches and no place more to go. Keep reading. It is very important don’t wait and watch it. It is well worth to read. How you know about security threats is completely different than I see in Java security. You will never create a very effective threat model. When you see and read this article, you will face a problem of security.
Security threats: you probably know how it is in smart security; you should know, as well as I do, what a security threat is. Virtel: Virtel is

Can I hire someone for Java coding help with Android App Security Threat Intelligence Incident Response Threat Intelligence Case Studies? The Adobe APK application gives you access to several of our Android Security Forensics tools. Now you can hire a competent expert to help you with your Android Security Threat Intelligence Incident Response Action Response Attack Intelligence Case Study. After submitting our questions we receive a response via Adobe APK code that contains the code for this APK. Our response form has a large number of user-generated comments, which can be added to a large list of users upon adding this document. If you make any input, the comments can be added to one or more users' comments, or will be filtered by us to ensure that your comments are relevant to your case. The ability to add comments to different user types is a powerful and flexible feature.
1. Do you need your app for a Java App Security Threat Intelligence Incident response?
2. Have you applied(1) the Java App security function to your app, and should it not run directly on your Java App Security function?
3. Is the help request from our tool team available for third parties?
4. Is your app available as a Java App Security Challenge for download?
5. Was your app protected by Java 9 compliant yet Android's SDK?
6. Have you applied(1) the Android Security Function to your Android app?
7. Please state your intent(1) in the Android Security Toolset for Android App Security Threat Intelligence Incident Response Assessments.
8. If the purpose of this application is to protect third parties, have you added any cookies to your app(2) page?
9. Can you hear me?
10. What will your app do?
11. In Section 6.2, please. Are you in violation of App Store rules? Do we need to share your answer if we violate any of the requirements behind the App Store?
12. Will you do a fresh app
13. Have
Some of my players wanted to create characters with ability scores higher than 20 as 1st level PCs. I cannot do that as I am running an adventure path which is made for characters with average stats. How can I explain to them the balance of the ability score system?
Explain that the balance is intended to provide the maximum amount of entertainment. Stats too low can be amusing, and an empathetic DM can always work with that. If stats are boosted above 20 to start, the unfair advantage would probably make the adventure dull and boring through lack of investment in the character. I like to use extremes in explanations (although that has its own faults) to show how bad it could be: if your first-level character had god-like stats, nothing is challenging, so there is very little incentive to do much... unless they would like 20th-level stats and to fight first-level enemies. For further ammunition, explain that every encounter, monster, and trap is based on an average stat system. When you mess with the stats, you mess with the balance.
Classically speaking, Dungeons and Dragons considered scores between 3 and 18 to be the bounds of human achievement. This was based on how, back in the day, we actually rolled for each of these abilities (3d6, which generates a bell curve between 3 and 18 with the top of the curve happening at 10). You couldn't have a higher STR than 18 because d6es don't have 7 pip sides. Now, it should be noted that even by AD&D there were modifications to this being made (if you got an 18 STR you also rolled a d100 and added that, so a guy with an 18/95 was stronger than a guy with 18/12) but that's how it started.
The lines have been blurred a bit but that same basic idea holds. Human PCs can't have STR higher than 18 because there are no humans that strong. An 18 STR human represents the strongest of the strong, at least among normal people. That's not saying that a human PC can't have some magical artifact that supernaturally increases their strength, or that they can't improve it over time (both old skool DnD and Pathfinder kind of hand-wave that) but they can't start with those scores because they don't get to start out as gods.
Another angle: back in the days of 1E, dwarves got, if memory serves, a +1 to their CON, allowing them to achieve a 19. If you were to allow a human PC to buy/roll their way to 19, you'd have diluted much of the advantage that dwarves have starting out over humans. It's not (just) that dwarves don't have to spend as many points to get there, it's that dwarves are physically hardier than humans, and as such the best of the best dwarves would be uniformly hardier than the best of the best humans. Those upper bounds simply were not available to humans.
Again, this has been diluted a bit by subsequent editions but if you're looking to house rule this back into place, there is a long line of tradition and tested gameplay on your side.
It's honestly really difficult not to want at least one stat at a higher level, but if you want to enforce the limit, make them use a stat-buying character creation method and lower the normal point threshold. It can seem kind of cruel, but you can tell them that if they want that 18+ stat, they're going to hurt for it. Telling them outright that this world doesn't require the same dynamic level of power seems the only way, as much as I hate to limit players in their chargen.
Edit: Given the new information that the players are all new to the system, they may not realize what the scale is actually supposed to be, and are getting lost in the ability modifiers rather than the ability scores themselves. They also probably don't have a firm grasp on a zero modifier meaning normal, not lacking. Perhaps explain some common probability charts for 3d6 and show why it's statistically rare to achieve the beloved 18 (a 1/216 chance with 3d6, or 1.62% with 4d6k3), let alone a 20, which needs the right race, and above which no RAW starting character should begin.
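Those odds are easy to verify by brute force. A quick sketch (Python, just enumerating every possible roll) reproduces both figures quoted above:

```python
from itertools import product

# Enumerate every possible 3d6 roll and count those summing to 18.
rolls_3d6 = list(product(range(1, 7), repeat=3))
p_3d6 = sum(1 for r in rolls_3d6 if sum(r) == 18) / len(rolls_3d6)

# 4d6 keep the highest three ("4d6k3"): drop the lowest die, sum the rest.
rolls_4d6 = list(product(range(1, 7), repeat=4))
p_4d6k3 = sum(1 for r in rolls_4d6 if sum(sorted(r)[1:]) == 18) / len(rolls_4d6)

print(f"3d6:   {p_3d6:.2%}")    # 0.46%, i.e. 1/216
print(f"4d6k3: {p_4d6k3:.2%}")  # 1.62%
```

Showing the players the full distribution this way (not just the chance of an 18) also makes it obvious why 10 is the expected score, not the floor.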
Completely aside my official answer above as a personal opinion, D&D might not be the right system for this sort of setting as it relies heavily (in my experience) on needing those numbers if you want characters to advance at a respectable rate.
Opportunity cost: The cost of an alternative that must be forgone in order to pursue a certain action. Put another way, the benefits you could have received by taking an alternative action. The slope of the PPC curve is the opportunity cost of bananas compared to rabbits.
Comparative Advantage: Is when an agent has a productive activity and has a lower opportunity cost of carrying this activity than another agent. The gains from specialisation grow larger as the difference in opportunity cost increases.
Specialisation: when a producer focuses on the production of one product that they can produce more of than another product in a certain period of time.
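To make the opportunity-cost and comparative-advantage definitions concrete, here is a tiny worked example (the producers and their hourly outputs are hypothetical, invented purely for illustration):

```python
# Hypothetical hourly outputs: name -> (bananas per hour, rabbits per hour).
producers = {"Alberto": (4, 2), "Bea": (3, 3)}

# Opportunity cost of one banana = rabbits forgone per banana produced.
opportunity_cost = {
    name: rabbits / bananas
    for name, (bananas, rabbits) in producers.items()
}

# Comparative advantage in bananas goes to whoever gives up the least,
# so that producer should specialise in bananas.
banana_specialist = min(opportunity_cost, key=opportunity_cost.get)

print(opportunity_cost)   # {'Alberto': 0.5, 'Bea': 1.0}
print(banana_specialist)  # Alberto
```

Note the gains from trade here come from the gap between the two opportunity costs (0.5 vs 1.0 rabbits per banana), exactly as the comparative-advantage definition says.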
Supply Curve: represents the relationship between the price of a good or service and the quantity supplied of that good or service (vary the price of apples to see how the supply of apples changes with it). Horizontally: start from a certain price and use the supply curve to derive the quantity of the good that will be supplied at that price. Vertically: start from a given quantity and find the associated price on the supply curve: the minimum amount of money the producer is willing to accept to supply the marginal unit of the good == producer reservation price
Production Possibility curve: represents all possible combinations of bananas and rabbits that can be produced with Alberto’s labour if he works all the available hours (if all inputs are used efficiently)
Consumption Possibility curve: represents all possible combinations of two goods that the agents in an economy can consume. (PPC vs. CPC) - international trade: two goods that the economy can feasibly consume when it is open to international trade (depends on international (world) prices)
Low-hanging fruit (principle of increasing opportunity cost): in the process of increasing the production of any good (given the resources available: capital/labour/technology), first employ those resources with the lowest opportunity cost, and only once these are exhausted turn to resources with higher cost.
Market equilibrium: occurs when the price and the quantity sold of a given good are stable; equivalently, the equilibrium price is such that the quantity that consumers want to buy is the same as the quantity that suppliers want to sell.
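The equilibrium definition can be illustrated numerically. A minimal sketch with made-up linear demand and supply schedules: the equilibrium price is the one at which quantity demanded equals quantity supplied.

```python
# Hypothetical linear schedules: demand falls with price, supply rises with it.
def quantity_demanded(price):
    return 100 - 2 * price

def quantity_supplied(price):
    return 3 * price

# Scan integer prices until the two quantities coincide.
equilibrium_price = next(
    p for p in range(101) if quantity_demanded(p) == quantity_supplied(p)
)
equilibrium_quantity = quantity_supplied(equilibrium_price)

print(equilibrium_price, equilibrium_quantity)  # 20 60
```

At any price below 20 consumers want more than suppliers offer (excess demand), and above 20 the reverse, which is why only this price is stable.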
Market: for a given good or service is the set of all the consumers and suppliers who are willing to buy and sell that good or service at a given price.
Marginal benefit: of producing a certain unit of a given good is the extra benefit accrued by producing that unit
Marginal cost: of producing a certain unit of a given good is the extra cost of producing that unit (the relevant cost is the ‘opportunity cost’ and not just the ‘absolute cost’ of producing the good.)
Cost benefit principle: states that an action should be taken if the marginal benefit is greater than the marginal cost
Economic surplus: of a certain action is the difference between the marginal benefit and the marginal cost of taking that action.
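The marginal benefit, marginal cost, cost-benefit, and surplus definitions fit together in a short calculation. With a hypothetical schedule of marginal benefits and costs (numbers invented for illustration), apply the cost-benefit principle unit by unit and sum the surplus of the units worth producing:

```python
# Hypothetical marginal benefit and marginal cost of each successive unit.
marginal_benefit = [10, 8, 6, 4, 2]
marginal_cost = [3, 4, 5, 6, 7]

# Cost-benefit principle: take the action (produce the unit) only if MB > MC.
worthwhile = [(mb, mc) for mb, mc in zip(marginal_benefit, marginal_cost) if mb > mc]

units_produced = len(worthwhile)
economic_surplus = sum(mb - mc for mb, mc in worthwhile)

print(units_produced, economic_surplus)  # 3 12
```

Stopping at the third unit is exactly the point: the fourth unit would cost 6 to gain 4, so producing it would shrink total surplus.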
Quantity supplied: by a supplier represents the quantity of a given good or service that maximises the profit of the supplier.
Sunk cost: is a cost that once paid cannot be recovered (differentiate its cost – firm)
Fixed cost: is a cost associated with a fixed factor of production, which means the cost does not vary with the quantity produced for example rent.
Variable cost: is a cost associated with a variable factor of production, which means the cost tends to vary with the quantity produced example electricity or labour
Short run: is a period of time during which at least one factor of production is fixed
Long run: is a period of time during which all factors of production are variable.
Profit: represents the difference between the total revenue and the total cost of production.
A full page ad in the New York Times yesterday said "Who's Killed More Animals?" Under a picture of recently convicted Michael Vick, the score is "8." Under an image that's supposed to represent the animal rights organization PETA, the score is 14,400. Aha, so the infamous pro-animal organization is actually in the business of killing animals?
I would have turned the page quickly if it weren't for the fact that a student in my animal rights class last semester did a presentation comparing different animal rights groups and presented these allegations. I had never heard them before, expressed skepticism, and moved on. The group behind the allegations evidently has a lot of money and really wants to slime the folks at PETA, so yesterday I decided to look into the matter.
I should say, to begin with, that I'm basically in sympathy with the PETA people. I've met the president of the organization, Ingrid Newkirk, and she struck me as extremely decent, compassionate, and reasonable. My own perspective on animal issues is not absolutist and uncompromising, like PETA's is, but their clear, simple message is valuable. I also think they're masters at attracting attention--and Newkirk was completely forthright about that goal. Once people are looking (at naked celebrities, shocking images, or whatever) you can tell them something important. The Humane Society is more my style, but the PETA people are good guys, in my opinion.
So what about all the animal-killing that's allegedly going on at PETA? Apparently the organization doesn't want to help the "PETA kills animals" campaign gain publicity, so they don't respond on their website. But I sent them an e-mail and they sent back a thorough response. You can read it here. Bottom line--people turn to PETA as a last resort with animals that are not adoptable. PETA does euthanize animals.
I don't feel scandalized by this fact. "No kill" animal shelters have a happy image but they aren't really responsible for less killing than the rest. The "no kill" shelters take animals only by reservation. The animals they turn away wind up...of course...at the other animal shelters that do euthanize.
One of the most passionately pro-animal students who ever took my animal rights class worked at the SPCA and actually helped euthanize animals. This was painful for her, but there simply isn't enough room at animal shelters to house all the unwanted cats and dogs.
The people we should feel angry at are not the ones doing the euthanizing. It's folks who don't spay and neuter their pets. And even worse, people who adopt a cat or dog and then for the most trivial reasons decide to return it. An article in the New York Times magazine last year said that some people will actually take a dog to a shelter because he no longer seems like the right accessory. You know, last year I was an Irish Setter person, but now one of those miniature dogs would fit my image so much better.
It's people like this that scare me.
Enhance automated accessibility testing in our development pipeline
What
Timeline: Ongoing work, with a strong focus until March 2023, and then continuing afterwards
Priority level: 3 – what reduces risk of inaccessible implementations
Category: Design system fundamentals
Potential tasks
Research new automated accessibility testing tools and determine whether they can add benefits beyond what our jest-axe automated testing is currently achieving.
Expand automated testing to include all example code snippets for each style, component and pattern.
Research the feasibility of testing example code snippets with dynamic JavaScript and multiple states, such as the cookie banner and character count components.
Epic lead
Not yet assigned
Why
As of November 2022, there are limitations to how the team implements jest-axe as part of our automated testing:
The team only tests the first example code snippet for each component and pattern – most of our components and patterns have multiple example code snippets.
The team only tests the static HTML version of example code snippets, as found in the HTML tab – some components and patterns include JavaScript that modifies the HTML, which tests do not capture.
Who needs to work on this
Developers, accessibility specialist
Who needs to review this
Developers, tech leads
Initial goals
[x] New automated accessibility testing tools are researched and changes to our existing testing are proposed
[x] Automated testing is expanded to include all example code snippets
[x] Components with multiple states get testing for each state
Future goals
[ ] Automated testing is expanded to include multiple stages of complex interactions and journeys (for example, cookie banner, character count, Exit this Page)
[ ] Research on automated testing focussed on screen reader outputs is completed and potential options are pursued
[ ] Processes are put in place to make sure tests are updated when new components are added or changed
Note: this is a duplicate of an older placeholder epic. I've closed the previous issue: https://github.com/alphagov/govuk-design-system/issues/1937
Note: this replaces another, older issue related to tooling: https://github.com/alphagov/govuk-frontend/issues/1971
Just so you know one limitation of jest-axe is that it runs using JSDOM so checks like colour contrast checks do not work. Moving towards something where a real browser is orchestrated (which would also allow testing between interaction states) would improve the coverage of axe-core testing.
@davidc-gds @NickColley Have either of you tried @axe-core/puppeteer (from Deque Systems)?
Hey @colinrotherham I've only heard of Puppeteer, and I'm not sure how it compares to jest-axe (or even if comparing the two would be a valid apples-to-apples comparison).
That being said, I can think of a few reasons to warrant further investigation into Puppeteer:
It's made by Deque, the people who make axe-core
The public npm package is newer than jest-axe (@axe-core/puppeteer was released 2 years ago, while jest-axe has been in govuk-frontend for at least 3 years?)
I think Puppeteer might be related or closely tied to Jest in some way? I don't know much about either, but we have a line in our package.json that says 'jest-environment-puppeteer'. There's also this page about using Jest with Puppeteer: https://jestjs.io/docs/puppeteer
Thanks @davidc-gds it definitely looks handy
But yeah it wouldn't be like-for-like as jest-axe adds nice "reporting" for us
Just so you know one limitation of jest-axe is that it runs using JSDOM so checks like colour contrast checks do not work. Moving towards something where a real browser is orchestrated (which would also allow testing between interaction states) would improve the coverage of axe-core testing.
@NickColley I'd imagine jest-axe could use Puppeteer's Page.setContent() to render static HTML just like JSDOM? Once rendered, could pass the page object into @axe-core/puppeteer versus passing an HTML element into axe-core?
I think unless there's a really compelling way to opt into a real browser environment I'd not change jest-axe.
Instead my gut feeling it's better to pick a good orchestration suite e.g. something like https://testcafe.io/ or https://go.cypress.io/ (used by the GOV.UK Prototype Kit project) and run axe as part of that.
https://www.npmjs.com/package/@testcafe-community/axe
https://www.npmjs.com/package/cypress-axe
This way you can write easier to maintain tests that interact with components in a real browser environment and run axe between the states.
Helped out with @axe-core/puppeteer and shared some of our Puppeteer v19 fixes:
https://github.com/dequelabs/axe-core-npm/pull/682
After their next release we'll be free to add real browser colour contrast testing etc 😊
@colinrotherham has another PR in the works to cover off pretty much all 3 of the potential tasks listed in this activity.
https://github.com/alphagov/govuk-frontend/pull/3522
If the PR is successfully merged, this may represent a successful completion of this activity, at least for its initial scope.
It also looks like it will resolve this longstanding issue related to improving automated accessibility testing:
https://github.com/alphagov/govuk-frontend/issues/1971
We merged it @davidc-gds 😊
Would be good to confirm if we have any missing tests such as the Cookie Banner UI states
Hi @colinrotherham! Where would we be able to review which tests we currently have? I don't think I'm aware of the background info on what happened with the cookie banner UI states you've mentioned.
I've set it up to grab all the examples from the GOV.UK Frontend Review app
For example these listing pages:
Accordion examples
Back link examples
Breadcrumbs examples
Button examples
Bear in mind that some components have "hidden examples" which you can navigate to:
Hidden example: Button with custom attributes
Hidden example: Button (input) with attributes
Hidden example: Button (link) with attributes
Hidden example: Button (link) without href
You won't find these linked in the Review app, but can see them in each component's config:
src/govuk/components/button/button.yaml#L152
They're typically for edge cases used by tests
Research the feasibility of testing example code snippets with dynamic JavaScript and multiple states, such as the cookie banner and character count components.
I mentioned the Cookie Banner (see examples) as it's not exactly set up how you'd find it on a service and it fit the "multiple states" point you made under Potential tasks
Might be good to identify components like this and add Axe tests to the Full page examples instead?
https://govuk-frontend-review.herokuapp.com/full-page-examples/cookie-banner-client-side
https://govuk-frontend-review.herokuapp.com/full-page-examples/cookie-banner-essential-cookies
https://govuk-frontend-review.herokuapp.com/full-page-examples/cookie-banner-server-side
After chatting with @colinrotherham, it appears that the initial 3 'potential tasks' are now resolved (which is amazing!)
This issue will remain open, because there are future opportunities for further enhancement. I've listed 3 potential future enhancements in the issue description.
I'm a big fan of this approach. You can see our implementation here with our own accessibility site https://accessibility.civicactions.com/posts/automated-accessibility-testing-leveraging-github-actions-and-pa11y-ci-with-axe
We've also taken some steps forward with the Drupal community too https://www.thedroptimes.com/30928/drupal-takes-step-forward-in-accessibility-with-automated-testing-integration
This is all just part of shifting accessibility left.
We're enhancing our development processes by adding the accessibility-alt-text-bot to our GitHub Actions.
Here's the GitHub issue where we add it:
https://github.com/alphagov/govuk-frontend/pull/3818
Learn more about the GitHub enhancement.
Ended up here following a nice gov.uk design system blog post. Noticed the “future” point of “Research on automated testing focussed on screen reader outputs is completed and potential options are pursued” - bit of shameless self-promotion but would be happy to talk through a package I maintain for just that https://www.guidepup.dev, and as a starter for 10 of other options in the wild there is also a list of alternatives at https://github.com/guidepup/guidepup#similar which hopefully might be of use.
Can you draw arbitrary graphics in a (neo)vim buffer?
I'm looking for how, if possible, to draw arbitrary graphics in a vim buffer. The reason is that my git graph is too difficult to read:
* | 2021-02-11 [32a37ba] {Tama ...
* | 2021-02-11 [3ee4853] {Tama ...
* | 2021-02-10 [e1a262c] {Tama ...
|\|
| * 2021-02-09 [883be01] {Tama ...
| * 2021-02-09 [8632108] {Tama ...
| * 2021-02-09 [e4ef3a1] {Tam...
| |\
| | * 2021-02-07 [1330a96] {T...
| | |\
| | * | 2021-02-07 [abcaf22] {T...
| * | | 2021-02-09 [aa45850] {T...
| * | | 2021-02-09 [f827bde] {T...
| * | | 2021-02-09 [60ee4db] {T...
| | |/
| |/|
| * | 2021-02-06 [0f6ac11] {Tam...
At FOSDEM this year, I was blown away by Nick Black's presentation, which is given entirely through a terminal buffer, even though it looks like a full-blown video file. So it seems as if the technology is ready to give vim arbitrary graphics, at least as long as you are running vim from a suitable terminal emulator.
There's a related question here, but it is just asking how to blit images in the terminal. The answer to this question will likely depend on that one.
In part, it sounds like if you could port vim/neovim to notcurses, you could do a lot of really cool things :)
Obligatory disclaimer that I have no clue what the extent of this is (nor have I used it, nor did I write the code), but there's a C plugin on Windows that does background images in gVim. While I have no idea how to port any bit of that to Linux (or the terminal on either OS), that at least shows you can do some things, at least if you're willing to venture outside the terminal. There might be some terminal options too. To what extent you can use this, though, is an entirely different question, and one I can't answer.
of course, at one point, it might just be easier to port whichever Vim you use to notcurses and throw in some fancy rendering APIs. You might also get far just by messing around with multibyte characters instead of fancy rendering (at least depending on your end goal) - there are a number of characters that interconnect across multiple lines, like ▏(\u258f)
Wikipedia actually has a list: https://en.wikipedia.org/wiki/Block_Elements -- there's also a fullwidth slash (/), but it doesn't work well with emerging from the right side of 258f. There's also a demo (ish, it's probably functional, but proves that it can be done with unicode) of something like this using unicode characters here if that's more what you're looking for.
interesting. It's unfortunate it was written in such an arcane programming language; I was hoping it would just be some trivial replacements of characters the git log outputs, but unfortunately this approach is going to be difficult to port: (cond ((and before-merge (eq merge (car trunkc)))(setq before-merge nil)(magit-pg-stradd output (magit-pg-getchar branchright colour) str)) (memq (car trunkc) trunk-merges) (magit-pg-stradd output (magit-pg-getchar down colour) etc.
This perl script seems to use all the same connecting unicode block_elements, although the colors in the emacs plugin are much better.
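Short of true arbitrary graphics, you can get most of the way with the substitution approach those plugins take: translate git's ASCII graph characters into box-drawing/block equivalents. A rough Python sketch (the character mapping and the "where does the graph prefix end" heuristic are my own guesses, not taken from either plugin):

```python
import re

# Map git log --graph ASCII art to nicer single-width Unicode glyphs.
GRAPH_CHARS = str.maketrans({"*": "●", "|": "│", "/": "╱", "\\": "╲"})

def prettify(line):
    # Translate only the leading graph section, i.e. everything before
    # the first alphanumeric or '[' (the start of the commit info).
    match = re.match(r"([^0-9A-Za-z\[]*)(.*)", line, re.S)
    prefix, rest = match.group(1), match.group(2)
    return prefix.translate(GRAPH_CHARS) + rest

print(prettify("* | 2021-02-11 [32a37ba] {Tama ..."))  # ● │ 2021-02-11 ...
print(prettify("| |\\"))                               # │ │╲
```

Piping `git log --graph` through such a filter (or applying the same mapping with vim substitutions) keeps the columns aligned, since these glyphs occupy a single cell in most terminals.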
If you're looking to add a Google registration/login option to your website, please follow these steps:
1. Go to https://console.developers.google.com/apis/credentials
2. Click on the "Select a project" drop-down, then on "New Project"
3. Choose a Project name & create it:
4. Select the newly created Project from the "Select a Project" drop-down, then click on the “OAuth consent screen” by hovering the mouse on "APIs & Services" from the left side menu.
5. On the “OAuth consent screen”, Choose User Type: External and click on Create button to proceed further.
6. Choose an "App Name" & add your email:
7. In case you've already indexed your website with Google, feel free to skip this step.
In case you haven't indexed your domain with Google yet, go to https://search.google.com/search-console/about in a new tab, click on start now and follow this guide:
The meta tags need to be added through the Website Editor. Once you've selected the website you'll be adding them to, you will need to go to Website Settings > Header Custom Script:
Just verify the domain through Google once you've added the code and it will be sufficient (you don't need to do anything after the 2:05 min mark in the above video)
If you are adding Google registration to a subdomain, please perform step 7 for that specific subdomain & the website carrying the bare domain.
(Subdomains are domain prefixes that are separated from the main domain with a dot (.) , if you're adding this to listings.domain.com, mls.domain.com *insert-any-word-here*.domain.com you are using a subdomain & will need to perform step 7 for the Bare domain, too, ie domain.com)
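To make the subdomain rule concrete, here is a simplistic sketch of extracting the bare domain. It naively keeps the last two labels, so it would get multi-part TLDs such as .co.uk wrong; the hostnames are just illustrative:

```python
def bare_domain(hostname):
    # Keep only the last two dot-separated labels:
    # listings.domain.com -> domain.com
    return ".".join(hostname.split(".")[-2:])

# Google registration on a subdomain needs step 7 for BOTH the
# subdomain itself and the bare domain it belongs to.
for host in ["listings.domain.com", "mls.domain.com", "domain.com"]:
    print(host, "->", bare_domain(host))  # all map to domain.com
```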
Please feel free to reach out to our Support Team at email@example.com in case you're not sure if you're using a subdomain.
And once that is done, go here:
Find the domain and click on Verification Details and you will see if the domain is verified or not.
8. Under authorized domains, you will need to enter your Bare domain name:
For example, if you're adding the Google registration button for listings.domain.com, you will only add domain.com to the "authorized domains" field.
Then you'll add your email to the developer contact info section:
And hit save and continue.
9. On the following screens "Scopes" and "Test Users", you will just click "save and continue" until you've completed the project creation & you see the "Back to dashboard" button on the "Summary" page.
10. After creating the project, click on “Credentials” from the left side menu to open the Credentials screen.
From here, you will go to "Create Credentials" > "OAuth client ID"
11. The OAuth client ID type will be "web application":
13. You'll scroll down a bit to the "Authorized redirect URIs" section, where you will add the same URL appended with /signin-google
And then, click on Create.
14. You will get a pop-up with your Client ID and Client Secret. Please email both to firstname.lastname@example.org so our Team can install this on the website.
15. Note that the created ID is restricted to test users:
16. Click on OAuth consent screen, located right under Credentials, and click on Publish App:
It might take up to 4 days for this to be published, meaning, our Team will be able to install the IDs only once Google has completed the publication & our system can hence recognize the IDs.
|
OPCFW_CODE
|
Supervisor accuses other person of copying my PhD ideas and now he offers her my teachings next semester. What should I do?
I am in my 2nd year of a PhD. Some months ago, a woman from another department had similar PhD ideas to mine and those of my advisor's other student. Our supervisor was furious when he realised this, and he accused us of sharing our work without authorization. It turns out that my colleague had informed her about our ideas because they were close friends, and she copied them. When our supervisor learned this, he was still angry but was not able to do anything because he "didn't have any proof."
At the end of last semester, my supervisor talked with the woman who copied our ideas and invited her to co-teach our lectures next semester. I don't know exactly what her role will be or why her involvement is necessary. I thought this was unfair and I told him so. He was angry at me, but I don't regret it. Since then our relationship has been very cold and much more distant. In fact, I am worried he wants to replace me with this student who stole my ideas.
So, I am wondering: is my anger justified? Should I have concerns about the woman who copied (or at least was inspired a little too much by) our ideas? And about my supervisor for offering her a place in our lectures? I know my supervisor is head of a department and it is he who makes decisions, but I felt that this situation was unfair. Additionally, he accuses me of being unfriendly and not cooperative.
I guess I'm not following what your advisor did. You were TA'ing for a class that your advisor teaches, and now this other student will also be a TA for the same class? Or?
It sounds like your supervisor wants to make this person a collaborator, not a competitor. This is pretty common if the supervisor has capacity to take on more students. The other student is interested in a similar topic. I don’t know what “PhD ideas” you feel possessive of but usually I have more ideas than people to carry them out and would always like more smart students.
Yes, the one that he was angry at 6 months ago. It is a bit complicated; that's why I wanted to ask for advice. This woman is from another department, and it is her advisor's responsibility to find her lectures to teach, not another advisor's. I wonder why my advisor was angry at her 6 months ago while currently he acts like nothing happened, and instead accuses me of being unfriendly, suspicious and not willing to work in a group. My argument is that I play fair. It was he who noticed the similarities in our PhD proposals, so I don't understand why he let her work with us.
But did she steal your lectures? Or do you both have teaching assignments now?
Thank you for the answer, I understand that my advisor wants more students. For the last 1.5 years the subjects he let me teach have been very popular among master's students. With my advisor I created some scenarios for classes using techniques that he had never used in this class before. It was partly my idea, and it is part of my PhD research. He agreed to this in 2021. What I am saying is that he is introducing someone from a different department to a class that was created partly from my ideas in 2021 and is part of my PhD research in official documents.
We will both have teaching assignments now on a topic that my advisor and I created in 2021, which is part of my PhD research, and he agreed to this in 2021. I don't understand what she will do in this project, and I do not know why she does not look for lectures with her own advisor.
The environment you describe where advisors are responsible for teaching assignments is a bit foreign to me, I'm used to this being mostly up to departments and students themselves. Overall, it's quite unclear to me what all the relationships and associations are between the characters involved, and I'm a bit doubtful that someone from "outside" is going to be able to answer your question. Is there any reason you cannot simply have a conversation with your supervisor about this? You also talk about worry, and then anger... What is it you actually want to achieve?
This is very confusing. Why do you care about having more help teaching? Less teaching = more time for research = better career prospects.
Hello, thank you all for those answers. Maybe the problem is with me; I need to re-evaluate my views. Dawn, you are right: less teaching is more time for research. The reason I reacted like this is that the whole teaching class and its main idea were created by me and my advisor in 2021, and a small part of my PhD research rests on the results of student work from this class. Otherwise I wouldn't have written this post.
"I wonder why my advisor was angry at her 6 months ago and currently he acts like nothing happened" - anger is a feeling and not necessarily something that lasts for long. There's nothing unusual about leaving anger behind. If your supervisor can hire more people to do the work there (which may be good for you, see above), and this person is qualified to do this (maybe more than others who are available), to me it's normal that she is taken on.
"I don't know exactly what her role will be or why her involvement is necessary." It would probably have been better to ask your supervisor first about this in a neutral way, before complaining about it. Very often, particularly in a work environment, we can work better with others if we try to not let negative feelings guide our actions and communication.
In science, I find that ideas are easily stolen. But they are just ideas; what cannot be stolen is the hard work. I am a PhD student in math, and usually, when I work on a problem, there is an idea at the beginning. However, even once I have an idea, many weeks of hard work lie ahead of me to write down the formal proof. There is always room for more collaborators. Now, after 2 years of working on my PhD, I have realised I can't do the work alone, even on my own ideas! And I am willing to share my ideas with others, especially because I can't handle so much work; it has to be split among many.
One lesson to learn for life (and it takes time): get angry slowly. Find the facts first. Understand the situation. And then, if you still believe you should, get angry, but always in a controlled way. Yes, easier said than done.
The situation seems to have multiple components: a leaked piece of an idea, an unreliable colleague (who is that? someone from your group?), a second group, a PhD student, and you.
Now, the problem for you is to find out what the situation with this idea is, and to what degree it is actually a problem for you. Does the allocation of the idea to the other person endanger your PhD? Would you rather give that teaching yourself instead?
You should find out exactly what is going on, and perhaps ask your supervisor what he thinks the status with this idea is, and how he sees the collaboration with the other group, and what, if at all, your role is within this collaboration, as well as what effect it has on your PhD.
All this should happen in a sober, unexcited, calm and factual tone. Once you understand what your supervisor's thinking is, you can decide how you wish to proceed further.
In my own opinion, the only case where it is worth contemplating making a more forceful point is if that leak may endanger your own PhD; not being involved in the teaching is an ego hit, but from my experience not a hill worth dying on. In that case (where your PhD is affected), you also need to bring it up with your supervisor, possibly in the same meeting, and discuss what the consequences of this leak might be.
You might consider reminding him in a side remark that it was not you who decided to share the idea, to refresh his memory that you are affected by the situation that emerged just as much as he is.
This is a relatively diplomatic approach, but from what I read between the lines about the situation, the supervisor themselves may have lost control over developments and does not really know how to reclaim control over the idea against the other group. I think it is worth trying to show him you are on the same team and see where that goes.
I am surprised that most answers acknowledge neither the power dynamics at play within the supervisor-student relationship, nor the toxic and immature behavior of the supervisor. In a PhD, the supervisor (who is additionally the head of the department) has a disproportionate amount of power over their students. It is thus the supervisor's responsibility to provide good working conditions for the PhD students, and to make sure that communication is maintained and healthy, in order to overcome any problems arising during the PhD. From the testimony, however, we understand that the supervisor does not show responsible behaviour and does not treat his students with the care and consideration that are needed to get through a PhD. The student here has every right to feel disturbed and angered by the situation, while the supervisor is just pouring oil on the fire. This behaviour is neither responsible nor mature for a PhD supervisor.
Thank you for this answer. What would you suggest doing? Actually, it is not the first time I have felt 'weird' or uncomfortable when dealing with my advisor, and I cannot change him right now because I'm in the 2nd semester of my 2nd year. During the whole 1st year I internally felt that some actions and situations were not right, but I have never had a comparison with other supervisors. I always say to myself that 'it is how it needs to be', but on the other hand my intuition never fails me. I have also just written a 2nd post about a situation that triggered me last semester.
I don't find this answer clear or useful; there are no other answers (except for a spam answer posted after yours), so "most answers" doesn't make any sense. You describe the purpose of a PhD supervisor and their power, but do not make clear what "good working conditions" are not provided here. There is no "testimony" here, we aren't in court; it's not clear in what way the supervisor has not treated their students with care and consideration. What is irresponsible or immature? Why should the student feel disturbed and angry?
(to be clear, the "spam answer" I refer to is a deleted answer that most readers will not see; I'm not referring to any not-deleted answers which will have been posted after that comment)
|
STACK_EXCHANGE
|
The dnssec-tools patch for webmin enables the zone administrator to use the tools from the dnssec-tools suite to manage DNSSEC operations on their zones. The following screenshots highlight the new features that have been made available in Webmin with the patch applied.
Note: This feature is currently only available on the CentOS platform. The dnssec-tools package (which can be found in the EPEL repository) must also be installed.
DNSSEC status displayed in the zone listing.
The DNSSEC status can be one of the following:
- Signed: If the zone is signed and managed by DNSSEC-Tools
- Unsigned: If the zone is unsigned or not managed by DNSSEC-Tools
- In ZSK Roll: If the zone is in the midst of a ZSK rollover operation
- In KSK Roll: If the zone is in the midst of a KSK rollover operation
- Waiting for DS: Waiting for the administrator to notify the "rollerd" daemon that the DS record has been published in the parent and that sufficient time has elapsed since the publication of the new DS record.
Manage Rollover operations using rollerd
- The output from the dnssec-tools 'lsdnssec' command is displayed in order to provide information on the current phase of ZSK and KSK rollover.
- A zone may be in only one rollover operation at any given time, but zones may be safely re-signed at any time
- DNSSEC status and any DNSSEC-Tools meta-data for a zone may be disabled at any time. However, it is the responsibility of the zone administrator to manually remove any DS records from the parent zone prior to disabling DNSSEC for a zone.
- A zone that is in a KSK rollover operation will eventually need to have a DS record pointing to its new KSK. 'rollerd' will need to be notified when the parent zone has had the new DS record published for a sufficient length of time.
- The UI makes the KSK data readily available, and provides a way for the operator to notify rollerd of the DS publication event and 'Resume KSK Roll'
Migration to DNSSEC-Tools.
Webmin already has some support for DNSSEC, but lacks support for rollover operations. The dnssec-tools patch for webmin enables the operator to migrate a zone that uses the legacy webmin-managed DNSSEC zone to DNSSEC-Tools.
The parameters that can be configured are:
- Administrator email address: Address to which notification messages from daemon programs are to be sent
- key algorithm: algorithm used to sign the zone
- KSK length: key length for the KSK
- ZSK length: key length for the ZSK
- Use NSEC3: whether zones should be signed using NSEC or NSEC3
- Signature validity period: The end time for new signatures in (+) seconds
- KSK Rollover interval: The interval between two scheduled KSK rollover operations
- ZSK Rollover interval: The interval between two scheduled ZSK rollover operations
- Period between re-signs: How often to resign zones.
|
OPCFW_CODE
|
ClickHouse is a popular database with nearly 25,000 GitHub stars. Styling itself as "a free analytics DBMS for big data," ClickHouse has achieved widespread adoption and helps engineers everywhere perform real-time analytics at scale. Benchmarked at 100x faster than Hive or MySQL, ClickHouse is adopted by many engineering teams to serve queries at very low latencies across large datasets. To achieve this performance at scale compared to standard data warehouses like Snowflake, it makes some architectural tradeoffs that users should keep in mind. For this reason, users should consider conforming their data to ClickHouse best practices before ingestion. Decodable makes it simple to prep the data so ClickHouse performs at its best.
Here are 5 situations where you should pre-process your data before ingestion into ClickHouse:
1. Changing Data
When your data is in another database and you need to get it into ClickHouse, some users try to keep data fresh with frequent batch jobs that run every couple of minutes. This can lead to undesirable outcomes like heavy load on the source system, high compute costs, and unpredictable data consistency. A better option is to use Decodable's easy CDC (change data capture) capability, capturing new data in real time, formatting it in flight, and delivering it to ClickHouse in milliseconds.
2. Denormalize or Normalize
You face a tradeoff here between performance and cost. While normalized schemas are always more storage-efficient (smaller) compared to denormalized ones, they also require joins at query time that can be expensive in both compute dollars and latency. If you need ultra-low query latency and want to minimize compute cost, tables can be combined into a single table, unneeded columns filtered out, and the data ingested in this denormalized form. On the other hand, if you are not as latency sensitive, want lower storage costs, and want to maintain normalization, you may want to split streaming events into multiple events and route them into multiple tables. Decodable supports both of these ingest strategies.
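A minimal sketch of the first strategy, with illustrative field names (none of this comes from a real Decodable or ClickHouse schema): each streaming event is enriched with attributes from a small in-memory dimension table and unneeded columns are dropped, so the result can land in one wide table.

```python
# Dimension data held in memory (would normally come from a lookup table).
users = {42: {"country": "US", "plan": "pro"}}

def denormalize(event, dimensions):
    """Merge dimension attributes into the event and drop unneeded columns."""
    enriched = {**event, **dimensions.get(event["user_id"], {})}
    enriched.pop("debug_payload", None)  # filter out a column the queries never need
    return enriched

row = denormalize({"user_id": 42, "ts": 1700000000, "debug_payload": "..."}, users)
# `row` now carries the joined attributes, ready for a single-table insert.
```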
3. Buffer and DeDuplicate
Despite being a lightning fast database, ClickHouse does not perform well when ingesting rows one at a time (more here). Streaming events one at a time can be costly in terms of processing dollars and performance, so ClickHouse sets their default ingestion size at 1 million rows. Users will benefit from using stream processing to buffer streaming data, dumping it to ClickHouse when the optimal batch size is reached. It’s also important to consider implementing exactly-once in your streaming pipeline, which may provide cost, complexity, and latency advantages over pushing this deduplication processing into ClickHouse. Decodable easily buffers rows and enforces exactly-once by default.
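The buffering idea can be sketched in a few lines; the class below is a generic illustration, not Decodable's implementation. Rows accumulate until a batch-size threshold is reached, then a single bulk flush fires:

```python
class InsertBuffer:
    """Accumulate rows and flush them in large batches, as ClickHouse prefers."""

    def __init__(self, flush_fn, batch_size=100_000):
        self.flush_fn = flush_fn      # e.g. a function issuing one bulk INSERT
        self.batch_size = batch_size
        self.rows = []

    def add(self, row):
        self.rows.append(row)
        if len(self.rows) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.rows:
            self.flush_fn(self.rows)
            self.rows = []

batches = []
buf = InsertBuffer(batches.append, batch_size=3)  # tiny batch size for the demo
for i in range(7):
    buf.add({"id": i})
buf.flush()  # push the final partial batch
# batches now holds three flushes of 3, 3 and 1 rows.
```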
4. Data Formatting
ClickHouse can especially shine if you optimize the data. The performance difference between a UInt32 and a Float64 can be 2x, which promises rewards for users who convert data types in flight before ingesting. Reducing cardinality where possible, aggregating data to reduce the number of data points, and grouping where allowed before ingestion will also drastically improve performance. Masking PII and correcting timestamp/datetime types are other frequently required formatting steps. In addition, inserting raw data is often not convenient because very specific commands must be used to ensure data is spread across multiple nodes. More examples of recommended data formatting are here. Decodable can easily format data for ClickHouse.
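As a hedged illustration of such in-flight conversions (the field names and rules here are made up for the example, not taken from any real pipeline):

```python
from datetime import datetime, timezone

def format_row(raw):
    """Illustrative conversions applied before a ClickHouse insert."""
    return {
        # store as an integer (e.g. a ClickHouse UInt32) rather than a float/string
        "count": int(raw["count"]),
        # normalize an ISO timestamp to epoch seconds for a DateTime column
        "ts": int(datetime.fromisoformat(raw["ts"])
                  .replace(tzinfo=timezone.utc).timestamp()),
        # mask PII before it ever reaches the warehouse
        "email": "***",
    }

row = format_row({"count": "7", "ts": "2023-01-01T00:00:00", "email": "a@b.c"})
```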
5. Clean and Verify Data
Records often contain mixed message formats and inconsistent field names. ClickHouse users often utilize a dead letter queue to capture bad records for inspection and automated correction. Pre-processing with Decodable will guarantee messages are well-formed and consistent before insertion into ClickHouse.
If you are a ClickHouse user today and you have a streaming platform (Kafka, Kinesis, Pulsar, RedPanda, etc), you can log into Decodable.co and easily connect and transform your data today. If you need help, set up a free session with one of our experts here and we’ll assist in creating your pipeline.
You can get started with Decodable for free - our developer account includes enough for you to build a useful pipeline and - unlike a trial - it never expires.
Join the community Slack
|
OPCFW_CODE
|
When you use Zip2Go, it captures most of the critical files that are used with a given project. Its purpose is to allow tech support to re-create the environment that you are working in to help troubleshoot. But we have found another use for Zip2Go: it is a simple tool for preserving most of your critical customized files, such as tool libraries, material libraries, machine, control and post definitions, operation defaults, .MTB (toolbar file), .KMP (keyboard mapping) and the ever-so-important .CONFIG.
The problem is that in normal use, it only captures the files that support the machine that you have loaded. But if you load all of the machines that you use (or have modified), it will capture all of those files as well, which gives you most everything you would need to re-build your setup should something disastrous happen, such as a hard disk failure.
So if you want to try this out:
1. Start a new Mastercam session and add all of your machines. No need to load the Mastercam default machines.
2. Save the file with a name such as mastercam_backup_jan_15_2011.
3. Create some simple geometry, and attach a few basic operations. For similar machine types, such as all mills, copy and paste the operations from machine to machine. For Lathe, create a simple profile on another level and create a few lathe operations to it. Do the same for Router or Wire EDM if you have those licenses. Also, you can add any custom tool geometry you want to make sure you preserve, any fixtures, vises, tombstones, etc. organized on different levels.
4. Do a final save and then select all and run the post.
5. The final step is go to Help, Zip2Go and select "Create Zip2Go" - Note where the .Z2G file is placed. You can use the file, manage archive to rename or move the .Z2G to a network location or thumb drive.
Understanding the new x5 folder structure
Mastercam x5 utilizes three distinct locations for program files, shared data files and user data. The idea behind this change is to make it easier to install and maintain Mastercam from an IT standpoint.
The Program Folder
For 32 bit Windows, the folder is [C:\Program Files\mcamx5]
For 64 Bit Windows, the folders are [C:\Program Files (x86)\mcamx5] AND [C:\Program Files (x86)\Common Files\Mastercam]
Generally, there is no need to navigate to this folder. But knowing where it is located can be helpful.
Items located in this folder:
Mastercam.exe and all related files, dll's etc. - also note the sub-folders shown to the right.
The Shared mcamx5 Folder
XP [C:\Documents and Settings\All Users\Shared Documents\shared mcamx5] VISTA AND WIN7 [C:\Users\Public\Documents\shared mcamx5]
This folder contains:
➡Machine and Control definitions
➡Tool and material libraries
The My mcamx5 Folder
XP [C:\Documents and Settings\<Username>\My Documents\my mcamx5] - VISTA and WIN7 [C:\Users\<Username>\Documents\my mcamx5]
This is your data folder, where MCX-5 files and NC files default to, unless otherwise specified.
These folders are quickly accessible from the "File" menu in Mastercam. The shortcuts "Open Shared Folder" and "Open User Folder" will take you directly to their contents.
|
OPCFW_CODE
|
chef provision -d option does not work as expected
I have this problem both with vagrant and vsphere.
I have the following machine definition in provision\recipes\sparkler_create.rb :
machine 'sparkler' do
recipe 'apt::default'
end
When running it as :
chef provision --no-policy --recipe sparkler_create
All goes well - the machine is created in VSpere as expected. From the documentation, I expect to be able to run :
chef provision -d --no-policy --recipe sparkler_create
And have the machine destroyed. Yet it does not get destroyed; instead it reruns action :install.
± |test_provision ?:2 ✗| → chef provision -d --no-policy --recipe sparkler_create
[2015-12-04T09:31:48-05:00] WARN: found a directory test in the cookbook path, but it contains no cookbook files. skipping.
Compiling Cookbooks...
Recipe: provision::vsphere
* chef_gem[chef-provisioning-vsphere] action install (up to date)
* chef_gem[chef-provisioning-vsphere] action installWARN: Unresolved specs during Gem::Specification.reset:
mini_portile (~> 0.6.0)
WARN: Clearing out unresolved specs.
Please report a bug if this causes problems.
(up to date)
Recipe: provision::sparkler_create
* machine[sparkler] action converge[2015-12-04T09:31:52-05:00] WARN: Checking to see if {"driver_url"=>"vsphere://xxxxxxxxxxx/sdk?use_ssl=true&insecure=true", "driver_version"=>"0.8.2", "server_id"=>"xxxxxxxxxxxxxxx", "is_windows"=>false, "allocated_at"=>"2015-12-04 14:03:07 UTC", "ipaddress"=>"<IP_ADDRESS>"} has been created...
establishing connection to xxxxxxxxxxxxxxxxx
[2015-12-04T09:31:53-05:00] WARN: returning existing machine
...
All works well if I add the action :destroy in the cookbook (I actually have a sparkler_destroy recipe)
± |test_provision ?:2 ✗| → chef provision -d --no-policy --recipe sparkler_destroy
[2015-12-04T09:39:52-05:00] WARN: found a directory test in the cookbook path, but it contains no cookbook files. skipping.
Compiling Cookbooks...
Recipe: provision::vsphere
* chef_gem[chef-provisioning-vsphere] action install (up to date)
* chef_gem[chef-provisioning-vsphere] action installWARN: Unresolved specs during Gem::Specification.reset:
mini_portile (~> 0.6.0)
WARN: Clearing out unresolved specs.
Please report a bug if this causes problems.
(up to date)
Recipe: provision::sparkler_destroy
* machine[sparkler] action destroyestablishing connection to XXXXX
- Delete VM [RateReview/sparkler]
- delete node sparkler at https://XXXXXX/organizations/devops
- delete client sparkler at clients
Thought I would also add :
○ → chef --version
Chef Development Kit Version: 0.10.0
chef-client version: 12.5.1
berks version: 4.0.1
kitchen version: 1.4.2
chef provision relies on you passing options from its context object to your actual recipe. You need to do something like:
# The context holds the values you passed in on the command line
context = ChefDK::ProvisioningData.context
machine 'sparkler' do
recipe 'apt::default'
action(context.action)
end
What documentation are you following?
I'm looking at the documentation for "chef provision" : https://docs.chef.io/ctl_chef.html#chef-provision
I tried what you mentioned and everything works - which is great!! Thank you!!
But it also begs the question - what documentation are you following? My background is Ops and Bash; I still get lost in Ruby and objects. I did try going through the chef-dk code.
And MANY Thanks for the quick response !!
I wrote chef provision but I sometimes forget the details of everything. I included an example that uses most features of chef provision in this blog post: https://www.chef.io/blog/2015/08/18/policyfiles-a-guided-tour/ so that's what I look at when I need a reminder.
context = ChefDK::ProvisioningData.context
# Set the port dynamically via the command line:
target_port = context.opts.port
with_driver 'vagrant:~/.vagrant.d/boxes' do
options = {
vagrant_options: {
'vm.box' => 'opscode-ubuntu-14.04',
'vm.network' => ":forwarded_port, guest: 80, host: #{target_port}"
},
convergence_options: context.convergence_options
}
machine context.node_name do
machine_options(options)
# This forces a chef run every time, which is sensible for `chef provision`
# use cases.
converge(true)
action(context.action)
end
end
Anyway, I'll notify the documentation team about this and see if I can get an improved example included on the page you linked.
I've filed this bug in the chef-docs issues, so I am closing this one.
https://github.com/chef/chef-web-docs/issues/768
I have been ignoring policyfiles for now - and this is probably why I missed the above blog entry (or glanced over it quickly).
No time like the present though !! Thanks again.
|
GITHUB_ARCHIVE
|
By the end of this tutorial, you'll have built a working light device with a pushbutton switch, connected it to ThinCloud, and be able to send it commands from a GraphQL client. You'll also configure the light to send state change updates to ThinCloud when the pushbutton is pressed, notifying the cloud that the light's state has changed, such as when the light is turned on or off.
This tutorial is suited for developers just starting out with Thincloud who want to follow along a basic example to build a device and get it connected and operational with ThinCloud. It assumes no prior knowledge or use of Thincloud, but developers are encouraged to review the Thincloud documentation for a deep-dive on the concepts covered in this tutorial.
ThinCloud provides cloud connectivity for consumer devices, out-of-the-box integrations with Yonomi, Google Assistant, and Amazon Alexa, user-device access roles and permissions, and much more. For more information on ThinCloud, please see the Yonomi website.

Recommended Prerequisites
While this training includes all the code required to build your device and does not assume mastery of any specific technology, familiarity/experience with the following technologies will be helpful (links provided below):
- C/C++ Language (used to write device firmware)
- GraphQL & GraphQL Playground
- MQTT - here or here
- mTLS Communication and x509 Certificates
The list above links to external sites that host introductory tutorials and training on each of these topics.
Do I have to actually build hardware to use ThinCloud?

ThinCloud Alternatives
No, you don't. If you want to learn how to use ThinCloud through a tutorial but don't want to fuss with building hardware, or would rather work with IDE-based examples instead of C++-based examples, check out the Building a Simple Virtual Device with ThinCloud Tutorial, which covers the same steps as this tutorial but uses a virtual device running in MQTTBox instead of a real one.

What Will We Build?
Before we start building, let's figure out what we want our light device to do.
This will be a simple light device. It will consist of an LED and a pushbutton, connected using wires and resistors to an ESP32 Microcontroller. This light will be connected to the internet and securely registered to the Thincloud IoT Cloud backend. The light will then be associated with a user, who will have access to control it using a web-based client.
The pushbutton will turn the light on or off when pressed and send an update to the cloud (Thincloud) when the light's state changes from on-to-off or from off-to-on.
Using pulse width modulation, we'll add the ability to set a brightness level on the LED. Last, the light will support the ability to blink on and off 1-10 times at a fast, medium or slow rate of speed using a "blink" command.
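The behavior described above can be sketched as a tiny state model. This is a hypothetical illustration in Python (the class, field names, and blink-rate values are assumptions, not the ThinCloud API or the tutorial's firmware), just to make the on/off, brightness, and blink rules concrete:

```python
class Light:
    """A tiny model of the tutorial's light: on/off state, brightness, blink."""

    BLINK_RATES = {"fast": 0.1, "medium": 0.5, "slow": 1.0}  # seconds per toggle (assumed)

    def __init__(self):
        self.is_on = False
        self.brightness = 100  # percent; mapped to a PWM duty cycle on real hardware

    def press_button(self):
        # Toggle local state; a real device would also publish a state update
        # over MQTT here so the cloud learns about the change.
        self.is_on = not self.is_on
        return {"isOn": self.is_on}  # payload a state-change report might carry

    def blink(self, times, rate):
        """Validate a blink command and return the toggle schedule it implies."""
        if not 1 <= times <= 10:
            raise ValueError("blink count must be between 1 and 10")
        if rate not in self.BLINK_RATES:
            raise ValueError("rate must be fast, medium or slow")
        # On hardware this would toggle the LED; here we just return the plan:
        # each blink is one on-toggle and one off-toggle.
        return [("toggle", self.BLINK_RATES[rate]) for _ in range(times * 2)]
```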
At the end of the project we'll have built a device that communicates over MQTT to Thincloud and can be controlled both locally (physically) and remotely using a web-based client:
We're only building one light device for this project, so the process we'll use will be tailored to that scale, but what if we were building 100?
Or 1,000?
Or 5,000 a day?
Thincloud was built to handle IoT deployments from startup-scale to enterprise-scale, efficiently and cost-effectively, and there are some important concepts that smart-device makers will consider as they build their products. If you'd like to know more about the IoT device development process and how this project would look at a larger scale than just one device, check out this guide on The IoT Product Development Process.
If you're already well-versed in that process then continue to the next section of this tutorial. Read the full tutorial here.
What to expect:
Step 1: Build the Device
Step 2: Acquire Certificates
Step 3: Configure ThinCloud
Step 4: Test MQTT Connection
Step 5: Create The End User Accounts
Step 6: Complete Your Light Device
|
OPCFW_CODE
|
Download the authoritative guide: Enterprise Data Storage 2018: Optimizing Your Storage Infrastructure
What's on the Near-Term Horizon (12-24 Months)
Almost every storage pundit on the planet says we are just on the horizon for optical storage that will solve the world’s problems. The problem is that this has been said off and on for a number of years. Perhaps this time it's true, but I do not see it happening within two years for the following reasons:
- The cost for optical storage will be high initially, which limits its usage
- If the cost is high, the only place it might be used is where organizations can afford the cost (large enterprises and/or the U.S. Government), which means it must be proven technology with a long shelf life and low bit error rates
This might be a boring prognostication, but what I see coming in the near-term is more of the same. I expect tape density to increase with LTO and enterprise tapes. In fact, if you look at the trend in tape density over the last few years, it is increasing at a faster rate than disk density (60 GB native for StorageTek 9940 tapes in 2000, and 200 GB native in 2002). This trend will continue given that some significant technology improvements have been made in the development of high-density tape and tape drives.
What's also significant is the actual amount of data written to the tape with compression. The trend from some of the HPC sites I work with is significant. In one example, the site was seeing 1.3 to 1 compression using the 60 GB tape drive. After moving the same data to the 200 GB tape drive, they saw 1.6 to 1 compression. That means the increase in effective capacity was not 140 GB but rather 242 GB, about 4x over two years. Of course there are many data types that cannot be compressed, such as movies, audio, and some pictures, but a great deal of data on tape is compressible.
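The arithmetic behind that 4x claim can be checked directly, using only the figures stated above:

```python
# Effective capacity = native capacity x observed compression ratio.
old_effective = 60 * 1.3    # 60 GB drive at 1.3:1 compression -> 78 GB
new_effective = 200 * 1.6   # 200 GB drive at 1.6:1 compression -> 320 GB

gain = new_effective - old_effective    # 242 GB more per cartridge (not just 140 GB)
growth = new_effective / old_effective  # roughly 4x over two years
```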
What is on the near-term horizon from a bunch of different companies is power management ATA storage. The idea is that disks running without power can be easily powered on when someone needs to write/read data. Serial ATA (SATA) drives have a power management interface that allows this, and laptop drives have had this power management interface for years.
What's on the Mid-Term Horizon (24-36 Months)
This is a hard area to predict for several reasons:
- Standards groups do not always follow schedules — I know of a number of products that were delayed because of the change from 1-Gig FC to 2-Gig FC. Whatever you say about a new storage technology, the interface will have to be something that follows the standards. Of course, there will be market leaders and followers, but still, this is not the 1970s.
- What are going to be the tradeoffs between density and speed? Will two technologies develop that will allow each, or will one provide both? Some optical technologies are expected to provide both. Something that does not have high performance is going to be an issue given the time to migrate the data to the next product.
|
OPCFW_CODE
|
10-09-2013 04:46 PM
I'm trying to access a couple of switches (4100) to run the "supportshow" command. I have no IP address, and I cannot get a prompt through the serial cable or the network.
Can anybody help me??
10-09-2013 08:27 PM
From what you describe, you have no access to those switches at all; is that correct?
->can not get prompt through serial cable
Most probably you have the wrong serial cable, or the wrong values set in the COM port configuration.
Anyway, this is a common topic; use the search feature here in the community and try to resolve the serial connection first.
Then you can continue with the other steps.
10-10-2013 11:41 AM
What characteristics should the cable have?
It was supplied with the switch, but I don't know the pin characteristics.
Is there another way to get the supportshow output?
Without access to the CLI (serial) or another management interface like BNA, no.
How can I find the switches' IP addresses?
They should be registered in a document or database.
If you don't have a document, try your CMDB/DNS/DHCP to see if you spot something that might point to your switches.
If you still haven't found them, but you know where the management port is plugged in (i.e., which switch and port), you could ask your network team which VLAN is assigned to the port and whether a MAC address is present.
If so, you could try a netscan of that VLAN in the hope that some ports advertise themselves (22/23/80 are the usual suspects).
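That netscan can be done with any scanner, or with a few lines of Python if nothing is installed. The sketch below is generic (not Brocade-specific, and the address range in the comment is only an example); it probes a host for the usual management ports by attempting a TCP connect:

```python
import socket

def scan_host(host, ports=(22, 23, 80), timeout=0.5):
    """Return the subset of `ports` on which `host` accepts a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the TCP handshake succeeded
                open_ports.append(port)
    return open_ports

# Example sweep of a hypothetical /24 management VLAN:
# for n in range(1, 255):
#     found = scan_host("10.0.0.%d" % n)
#     if found:
#         print("10.0.0.%d" % n, found)
```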
10-11-2013 01:46 AM
A cable should have been provided with the switch, but if you do not have it, you should get a cable with the following characteristics:
Serial Port Specifications:
The serial port is located on the port side of the switch. It is a three-wire RS-232 port with a DB-9 male connector, designed to connect to a DTE port.
The port requires a straight serial cable with a female 9-pin subminiature-D connector. Only pins 2, 3, and 5 are supported.
PIN Signal Description
1 Not supported Not supported
2 RxData Receive data
3 TxData Transmit data
4 Not supported Not supported
5 GND Logic ground
6 Not supported Not supported
7 Not supported Not supported
8 Not supported Not supported
9 Not supported Not supported
In a Windows environment:
Bits per second: 9600
Stop bits: 1
Flow control: None
|
OPCFW_CODE
|
# -*- encoding: utf-8 -*-
#
# :authors: Arturo Filastò
# :licence: see LICENSE
import random
import socket

from twisted.python import usage
from twisted.internet import defer

from ooni.templates import scapyt
from scapy.all import *

from ooni.utils import log
class UsageOptions(usage.Options):
    optParameters = [['backend', 'b', 'google.com', 'Test backend to use'],
                     ['timeout', 't', 5, 'The timeout for the traceroute test'],
                     ['maxttl', 'm', 64, 'The maximum value of ttl to set on packets'],
                     ['dstport', 'd', 80, 'Set the destination port of the traceroute test'],
                     ['srcport', 'p', None, 'Set the source port to a specific value']]

class ParasiticalTracerouteTest(scapyt.BaseScapyTest):
    name = "Parasitic TCP Traceroute Test"
    author = "Arturo Filastò"
    version = "0.1"

    usageOptions = UsageOptions

    def setUp(self):
        def get_sport():
            if self.localOptions['srcport']:
                return int(self.localOptions['srcport'])
            else:
                return random.randint(1024, 65535)
        self.get_sport = get_sport

        self.dst_ip = socket.gethostbyaddr(self.localOptions['backend'])[2][0]
        self.dport = int(self.localOptions['dstport'])
        self.max_ttl = int(self.localOptions['maxttl'])

    @defer.inlineCallbacks
    def test_parasitic_tcp_traceroute(self):
        """
        Establishes a TCP stream, then sequentially sends TCP packets with
        increasing TTL until we reach the ttl of the destination.

        Requires the backend to respond with an ACK to our SYN packet (i.e.
        the port must be open)

        XXX this currently does not work properly. The problem lies in the fact
        that we are currently using the scapy layer 3 socket. This socket makes
        packets received be trapped by the kernel TCP stack, therefore when we
        send out a SYN and get back a SYN-ACK the kernel stack will reply with
        a RST because it did not send a SYN.

        The quick fix to this would be to establish a TCP stream using socket
        calls and then "cannibalizing" the TCP session with scapy.

        The real fix is to make scapy use libpcap instead of raw sockets
        obviously as we previously did... arg.
        """
        sport = self.get_sport()
        dport = self.dport
        ipid = int(RandShort())

        ip_layer = IP(dst=self.dst_ip,
                      id=ipid, ttl=self.max_ttl)

        syn = ip_layer/TCP(sport=sport, dport=dport, flags="S", seq=0)
        log.msg("Sending...")
        syn.show2()

        synack = yield self.sr1(syn)
        if not synack:
            log.err("Got no response. Try increasing max_ttl")
            return
        log.msg("Got response...")
        synack.show2()

        if synack[TCP].flags == 17:
            # 17 == FIN (0x01) + ACK (0x10)
            log.msg("Got back a FIN ACK. The destination port is closed")
            return
        elif synack[TCP].flags == 18:
            # 18 == SYN (0x02) + ACK (0x10)
            log.msg("Got a SYN ACK. All is well.")
        else:
            log.err("Got an unexpected result")
            return

        ack = ip_layer/TCP(sport=synack.dport,
                           dport=dport, flags="A",
                           seq=synack.ack, ack=synack.seq + 1)
        yield self.send(ack)

        self.report['hops'] = []
        # For the time being we make the assumption that we are NATted and
        # that the NAT will forward the packet to the destination even if the
        # TTL has expired.
        for ttl in range(1, self.max_ttl):
            log.msg("Sending packet with ttl of %s" % ttl)
            ip_layer.ttl = ttl
            empty_tcp_packet = ip_layer/TCP(sport=synack.dport,
                                            dport=dport, flags="A",
                                            seq=synack.ack, ack=synack.seq + 1)
            answer = yield self.sr1(empty_tcp_packet)
            if not answer:
                log.err("Got no response for ttl %s" % ttl)
                continue
            try:
                icmp = answer[ICMP]
                report = {'ttl': empty_tcp_packet.ttl,
                          'address': answer.src,
                          'rtt': answer.time - empty_tcp_packet.time
                          }
                log.msg("%s: %s" % (dport, report))
                self.report['hops'].append(report)
            except IndexError:
                if answer.src == self.dst_ip:
                    answer.show()
                    log.msg("Reached the destination. We have finished the traceroute")
                    return
|
STACK_EDU
|
how do you aggregate data in a Udi-style SOA architecture?
We are implementing a SOA-architecture the Udi Dahan way, which means that services are business-aligned autonomous components (we have few services, each own a part of the domain and they are not allowed to call each other). We are using nservicebus pub/sub. I am trying to figure out the best way to handle "cross-cutting" data concerns.
Let me give you an example:
We have a game service which the user can use to play games. The games have deadlines and we want to warn when the deadline closes by sending the user a mail. The mail will contain data from multiple services. Now, since services can not call other services, I see a few different approaches:
1) Handle it in the game-service
Publish enough messages from the other services so that the game-service can store its own version of the data it needs, and therefore does not need to depend on data from other services when composing the mail.
Cons:
-More messages need to be published
-Denormalization of data
-Fuzzy data-ownership (one fact in multiple places)
-Cumbersome to add new data to the mail (more messages, store the stuff in the game service)
2) Create an aggregating service.
Create an aggregating service which will listen to service events, store everything it needs to create the mail, and fire it off when the game-service notifies that the deadline is closing.
Cons:
-Pretty much the same as 1), but data-ownership is a lot more clear
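As a rough sketch of option 2 (in Python rather than C#/NServiceBus, with invented event names and fields), the aggregating service just caches what each published event carries and composes the mail when the deadline-closing notification arrives:

```python
# A minimal sketch of the aggregating-service option: cache event data,
# compose the mail on demand. Event names and fields are invented for
# illustration; in an NServiceBus deployment these would be C# message
# handlers subscribed via the bus.

class DeadlineMailAggregator:
    def __init__(self):
        self.profiles = {}   # user_id -> data owned by the user service
        self.games = {}      # game_id -> data owned by the game service

    # Subscribed to an event published by the user service.
    def handle_user_registered(self, event):
        self.profiles[event["user_id"]] = {"email": event["email"],
                                           "name": event["name"]}

    # Subscribed to an event published by the game service.
    def handle_game_created(self, event):
        self.games[event["game_id"]] = {"title": event["title"],
                                        "players": event["players"]}

    # The game service notifies that a deadline is closing.
    def handle_deadline_closing(self, event):
        game = self.games[event["game_id"]]
        return ["To: %s\nSubject: Deadline closing for %s\nHi %s, your game ends soon!"
                % (self.profiles[uid]["email"], game["title"],
                   self.profiles[uid]["name"])
                for uid in game["players"]]

agg = DeadlineMailAggregator()
agg.handle_user_registered({"user_id": 1, "email": "ann@example.com", "name": "Ann"})
agg.handle_game_created({"game_id": 7, "title": "Chess", "players": [1]})
mails = agg.handle_deadline_closing({"game_id": 7})
```

Note that each cache entry is still conceptually owned by the publishing service; the aggregator only holds read-only copies.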
3) Create a client
Create a "client" (this client will not have a gui and will be nservicebus hosted, pretty much the same as a service, but also something very different). The client will subscribe to bus-events and just like 2) it will get notified by the game service when the deadline is closing. The client will compose mail by querying the services it needs to gather the information it needs.
Cons:
-A client service (fuzzy architecture)
-Everything needed to compose the mail must be queryable (exposed)
How did you do this in your great pub/sub Udi style SOA architecture? :-)
If you can do HTML email, then have your email component grab the HTML output of a URL that does the regular form of composition. If you can't do HTML, then you'll need the IT/Ops service to collect the information (but that is done via in-process communication to components from the various business services that are installed on the same endpoint).
Well, as far as I understand Udi Dahan (especially in his more recent writings) option 3 would come closest. Every bit of information stays with the owner and the client is just a mere aggregator.
|
STACK_EXCHANGE
|
Assistant Professor, Department of Geosciences
PhD. - University of Calgary, Canada
MSc. - University of Calgary, Canada
BSc. - University of Regina, Canada
Dr. Delparte has an extensive background in the applications of GIS and remote sensing to the fields of geosciences, resource management and conservation/environmental planning. Dr. Delparte's current research focus relates to visualization, 3D modeling and analysis. She is using 3D platforms to visualize her research work with photogrammetry, Structure from Motion (SfM), LiDAR and point-cloud generation from gaming devices. Specific research applications relate to avalanche flow modeling and hazard mapping, terrain models, land cover change, precision agriculture and image analysis. Her professional experience also extends to the government and industry sectors.
- Delparte, D., Peterson, M., Perkins, J. and Jackson, J. 2013. Integrating Gaming Technology to Map Avalanche Hazard. 2013 International Snow Science Workshop, Chamonix Mont-Blanc.
- Delparte, D., M. Peterson, J. Jackson and J. Perkins. 2012. Modeling and visualizing avalanche flow using genetic algorithms and OpenGL. International Snow Science Workshop 2012. Anchorage, AK.
- VanZandt, M., Delparte, D., Duvall, D., Penniman, J., 2013. Nesting Distribution and Habitat Selection of the endangered Hawaiian Petrel (Pterodroma sandwichensis) on the Island of Lanāi. Waterbirds. (Accepted).
- Wu, J., Delparte, D., Hart, P. 2014. Movement patterns of a native and non-native frugivore in Hawai‘i and implications for seed dispersal. Biological Conservation. (Accepted).
- Melrose, J. and D. Delparte. 2012. Hawaii County Food Self-Sufficiency Baseline. County of Hawaii Research and Development Department. 212pp.
- Giambelluca, T., Q. Chen, A. Frazier, J. Price, Y. Chen, P. Chu, J. Eischeid, D. Delparte. 2013. Online Rainfall Atlas of Hawaii. Bulletin of the American Meteorological Society. 94, 313–316 (http://dx.doi.org/10.1175/BAMS-D-11-00228.1)
- Delparte, D. 2011. Small-Scale Geospatial Data Repositories: If You Build It, Will They Come? Position Paper and Presentation for the First NSF CyberGIS Project All Hands Meeting, September 28-30, Oak Ridge National Laboratory, Tennessee
- Delparte, D., Jamieson, B. and Waters, N., 2008. Statistical runout modeling of snow avalanches in Glacier National Park, Canada. Cold Regions Science and Technology. 54, pp.183-192.
- Delparte, D. 2008. Avalanche Terrain Modeling in Glacier National Park, Canada. PhD Thesis. Department of Geography. University of Calgary, Calgary, AB, Canada, p 179
- Delparte, D. M. 2006. The Use of GIS in Avalanche Modeling. Knowledge Media Technologies, First International Core-to-Core Workshop. No. 21, Dagstuhl, Germany.
- D'Eon, R.G. and Delparte, D. M. 2005. Effects of radiocollar position and orientation on GPS-radiocollar performance, and the implications of PDOP in data screening. Journal of Applied Ecology. 42(2), pp. 383-388
Donna Delparte, PhD
Department of Geosciences
Idaho State University
921 S 8th Ave, STOP 8072
Pocatello, ID 83209-8072
|
OPCFW_CODE
|
Is it safe to block redirected (but still linked) URLs with robots.txt?
I have a website that has all URLs optimized and 301 redirected from nasty URLs to clean ones. However, everywhere throughout the site the unclean URLs are linked in menus, content, products, etc. Google currently has all clean URLs indexed, along with a few unclean URLs too.
So the site still has linked everywhere the old URLs (ideally this wouldn't be the case but this is how it is ATM).
I would like to block the unclean URLs with robots.txt.
The question: if I block these unclean URLs with the robots.txt, when the entire website is linked with them (but they all redirect to the clean version), will this affect the indexing status at all?
If you disallow the unclean URLs in robots.txt, polite bots will no longer visit these URLs. So they will never notice that you 301-redirect them to other URLs that they’d be allowed to crawl. Bots that don’t know your clean URLs yet would only be able to visit these pages when they are directly linked to with the clean variant (not the blocked unclean variant).
So you should not block them in robots.txt.
As you are 301-redirecting your unclean URLs to corresponding clean URLs, you don’t need to do anything. Bots will know what to do. If some search engines still have some unclean URLs indexed, it should only be a matter of time until they update their index.
The redirect won't work on those blocked URLs. After adding a 301 redirect you don't need to apply a canonical as well. This won't hurt the site: I have seen big brands change their URLs and create millions of redirects, just like SEOmoz did after changing its name to Moz.
There's no need to block anything in robots.txt these days; simply use rel="canonical" on your pages and you never risk duplicate pages, regardless of whether a page is accessible via the unclean or the clean URL, because the canonical tells Google which one is preferred.
As for the indexed unclean URLs: if you use canonicals, these will deindex/update themselves. However, if you don't have replacement pages and just want them removed, then use both a noindex tag in your HTML head and disallow: /unclean-url/ in your robots.txt (Google recommends both robots.txt and noindex usage).
Also, for SEO purposes you should correct those URLs as soon as possible, as you're losing some link equity through the 301 redirect.
Ok thanks, but do you think it will affect the site or not? I'm working with a Budget and there are limits to what I can do ATM. Thanks.
How would the Googlebot stumble upon noindex, when it isn’t allowed to crawl the page in the first place?
|
STACK_EXCHANGE
|
Use C# or C to develop your OPC servers with the OPC Server Toolkit! Integration Objects' OPC Server Toolkit is an easy-to-use OPC library. It allows developers to quickly create OPC DA, DX and HDA server software. In fact, it supports the OPC Data Access, OPC Data eXchange and OPC Historical Data Access specifications.
OPC UA provides standards that are accepted across industries. I want to create a POC for a simple OPC UA client and server using C#.
Can someone please point me to the right code implementation? It would be a great help for understanding the standards to start with a simple example.
Regards,
The OPC Foundation has a set of open source repositories on github: https://github.com/OPCFoundation/.
The repository for .NET (https://github.com/OPCFoundation/UA-.NET) includes a number of samples.
Portland, ME - March 19, 2007 - Kepware Technologies announces the release of an easier download and install of their KEPServerEX OPC server with Simulation driver, as well as free sample code including OPC client code for VB.NET.
The code samples provided in this easy download will help developers who are interested in adding OPC client capabilities to their Visual Basic and Visual C++ applications. Kepware's simple VB OPC client code follows the code structure found in the OPC Automation Interface 2.0 specification. Kepware's code is well commented, demonstrates how to perform basic connectivity to a single OPC server, and provides additional recommendations for expanding the application. According to Kepware's Technical Support Manager Fred Loveless, "The code examples, with their internal documentation, provide very detailed explanations of commands needed for OPC connectivity. Although we're happy to assist our customers, this greatly reduces the support time required to get them up and running with custom applications."
The complex OPC client code was written to provide a robust, full-featured model for OPC-client-enabling your Visual Basic applications. One of the common problems developers face is how to connect to multiple OPC servers, add multiple groups with multiple OPC items, and keep track of change events while maintaining the high levels of performance that OPC can provide. The complex OPC client sample code provides these capabilities. Additionally, because no OPC client is complete without the capability to perform OPC tag browsing, the complex OPC client example implements a complete tag-browsing interface with filtering.
For automation professionals unfamiliar with Kepware, it is worth mentioning that much of this code has been available for several years and has been well received by companies creating custom applications.
For automation engineers who already use KEPServerEX but are new to custom OPC client development, this example OPC client code is also provided as part of the full KEPServerEX product suite; simply refer to the KEPServerEX / Examples folder in the Kepware Products directory on your computer. Users may borrow freely from this code to create their own OPC-client-enabled applications. If our free sample code is helpful, or if users feel we could add more information, please contact us with feedback. All sample code is created for use with the SimDemo project, a data-simulation tool included with Kepware's OPC server, KEPServerEX. The Simulator driver for KEPServerEX supports a wide range of data types as well as simulation functions like Ramp, Random, Sine, and User Defined.
ABOUT KEPWARE:
C# Opc Client
Kepware specializes in OPC and device communication technologies for the industrial automation market. Kepware's genuine OPC software products are known worldwide for quality, reliability, and ease of use, making Kepware the best choice for your industrial connectivity needs. Kepware's responsiveness to customer needs and strong partnerships with other leading automation suppliers ensure that your next application will be a success.
|
OPCFW_CODE
|
The essence of readable code
Readable code is a “never-get-old” concept that every developer knows by heart that if they would like to move forward in their career, it is needed to practice clean code writing. Like Martin Fowler has said: “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”
How quickly the code is written isn't what matters; its readability is. Rational, simple and consistent code brings plenty of benefits: practicing clean, clear code encourages good programming patterns and makes testing and maintenance easier. All of this saves cost and increases the overall value of creating and maintaining software.
Now it's time to dive into some good tips for delivering readable code.
We have always held the belief that commenting our code plays a significant role in helping other developers in the team understand what we did, so that no one turns away in terror while doing maintenance. Yes, it's undeniable that commenting is important; however, if the code is really readable, it shouldn't need a huge amount of comments. Instead, we should consider style and naming before thinking about comments.
Style can vary but needs to stick to at least some consistent rules including indentation and spacing. Try to think like how easy you can read newspaper because the text is divided into multiple columns with short but complete sentences. If you want your code to be simple and comprehensive, here is the golden rule: “Don’t exceed 80 characters on a single line of code.”
Also, always align your code. It's a small detail, but it makes a big difference!
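As an illustration (the variable names are invented), aligned assignments are easier to scan than the unaligned version:

```python
# Aligned: the values line up in a column the eye can follow.
first_name = "Ada"
last_name  = "Lovelace"
birth_year = 1815

# Unaligned: same content, but the eye has to hunt for each value.
first_name = "Ada"
last_name = "Lovelace"
birth_year = 1815
```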
Remember the DRY rule. DRY stands for "Don't Repeat Yourself". Don't rewrite your code again and again; instead, put it in variables and functions. This makes your code easy to understand and update.
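A quick sketch of the DRY rule (the strings are invented): the repeated part moves into one function, so a change has to be made in only one place.

```python
# Repetition: the same sentence template appears twice.
print("Welcome, Ann! You have 3 new messages.")
print("Welcome, Bob! You have 7 new messages.")

# DRY: put the repeated part in a function.
def welcome(name, count):
    return "Welcome, %s! You have %d new messages." % (name, count)

print(welcome("Ann", 3))
print(welcome("Bob", 7))
```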
Nothing makes your code more readable than expressive variable names. Good names should reflect and explain their content at a glance. In other words, we should be able to see what is contained in the variable, function, class, object, method or property just by looking at the name. Don't be afraid of long names, because modern programming languages and databases support names of more than 50 characters. To improve readability, try to avoid abbreviations. Also, avoid naming variables var, foo, x or mv. All of this contributes to the concept of "self-documenting code", which is very popular nowadays.
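For example (the names and numbers are invented), compare a cryptic name with a self-documenting one:

```python
# Hard to read at a glance: what are d and f?
d = 86400
def f(x):
    return x * d

# Self-documenting: the names explain the content and the intent.
SECONDS_PER_DAY = 86400
def days_to_seconds(days):
    return days * SECONDS_PER_DAY
```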
Code structure is a wide concept including of data structure, abstraction, control structure and the use of temporary variables. But keep in mind that you need to take care of small things first before you can handle big things. The most practical way is to document with code instead of comments.
Comments that narrate a sequence of steps can usually be replaced by well-named functions.
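A small sketch of that idea (the order-processing example is invented): each commented step becomes a named function, and the narration disappears.

```python
# Before: comments narrate each step.
def process_order(order):
    # validate the order
    assert order["qty"] > 0
    # compute the total
    return order["qty"] * order["price"]

# After: each step is a named function; no narration needed.
def validate(order):
    assert order["qty"] > 0

def compute_total(order):
    return order["qty"] * order["price"]

def process_order_clean(order):
    validate(order)
    return compute_total(order)
```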
Functional programming is extremely useful for making code short yet effective, but we won't dig into it in this short guide.
Finally, we can talk about comments, after mentioning them a few times in this guide. We all undoubtedly know that comments are very helpful for explaining and documenting what we are doing. However, it is unnecessary and time-consuming to list all the details that the code already makes obvious. Comments should only state what the expected result is and why it is like that.
To conclude, these are just a few important tips for writing better code. With practice over time, you will be able to write readable code. This benefits the growth of your code base and helps you refactor your existing code with ease.
|
OPCFW_CODE
|
Question: if I have three domains, domain1.com, domain2.com, and domain3.com, is it possible to set up a default virtual host for domains that are not listed? I'm using Apache 2.4 on CentOS 7.
The answers in the thread boil down to the following points.
ServerName and ServerAlias: let Apache know which domain to match to a virtual host by setting ServerName, and add any extra names (for example www.domain1.com, or a subdomain such as blog.domain1.com pointing at the blog files in your public_html folder) with ServerAlias. Whether you are serving different domains or different subdomains of the same domain, the procedure is the same.
Default virtual host: the first virtual host that Apache loads acts as the default and answers any request that matches no ServerName or ServerAlias. On Debian/Ubuntu it is therefore customary to put the catch-all configuration in 000-default.conf, which sorts first. A ServerAlias * will indeed match anything, but it may also override virtual hosts defined later, and overlapping catch-alls produce the warning "_default_ VirtualHost overlap on port 80, the first has precedence".
Diagnosing: run apache2ctl -S to see the virtual host configuration as understood by Apache, and apache2ctl configtest to check the syntax (although a clean syntax check doesn't necessarily mean your site is working). The Apache error logs usually show which directory or file has its permissions set incorrectly; you can watch new entries as they are added while you test, for example with tail -f /var/log/apache2/error_log. And always restart (or reload) Apache after changing the configuration, before drawing any conclusions.
Common pitfalls when a virtual host seems to be ignored:
- On Apache 2.2, the NameVirtualHost *:80 directive must not be commented out, and the address/port of each VirtualHost must match it. This error often occurs when virtual hosts are first created, because the default NameVirtualHost directive is commented out with a hash symbol. (Apache 2.4 no longer uses NameVirtualHost.)
- Access control changed between versions: Apache 2.2 uses the Order/Allow syntax, while Apache 2.4 uses Require all granted. Adding Require all granted on 2.2 gives a 500 Internal Server Error, and keeping only the old 2.2 syntax on 2.4 commonly causes 403 errors after an upgrade. The apache.org upgrade page is a good place to start when checking for incompatible settings and modules.
- On Ubuntu and Debian, virtual host files in sites-available must have a .conf extension to be enabled.
- If your Apache configuration file is replaced during an upgrade, the location of your default virtual host changes from /var/www to /var/www/html.
- DocumentRoot must be an absolute path; a path that doesn't start with a slash is a classic typo.
- Options -Indexes stops people from being able to go to a directory and see the files listed in there.
- For local testing, add an entry such as 192.168.33.10 myproject.local to your hosts file, which tells your machine where to look when resolving that name.
The same mechanism covers serving the same content on different IP addresses (such as an internal and an external address): outside the network, the name server.example.com resolves to the external address (172.20.30.40), but inside the network that same name resolves to the internal address (192.168.1.1). To provide as much backward compatibility as possible, you can create a primary vhost which returns a single page containing links with a URL prefix to the name-based virtual hosts.
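Putting the pieces together, a minimal name-based setup with a catch-all default might look like the sketch below (the domain names and paths are illustrative, taken from the question):

```apache
# Loaded first, so it catches requests for any unlisted domain.
<VirtualHost *:80>
    ServerName default.invalid
    DocumentRoot /var/www/html
</VirtualHost>

<VirtualHost *:80>
    ServerName domain1.com
    ServerAlias www.domain1.com
    DocumentRoot /var/www/domain1
    <Directory /var/www/domain1>
        # Apache 2.4 syntax; use Order allow,deny / Allow from all on 2.2
        Require all granted
    </Directory>
</VirtualHost>
```

On Debian/Ubuntu, save each vhost as a .conf file in sites-available and enable it with a2ensite; verify the result with apache2ctl -S.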
|
OPCFW_CODE
|
This is a multi-part post:
- Part 1 (this article) establishes terminology (tasks, threads and processes and how they relate to concurrency and parallelism) and gives an overview of challenges faced in concurrent programming.
- Part 2 shows what can go wrong when using threads without synchronization and explains the role and effects of the Global Interpreter Lock (GIL) in Python.
- Part 3 (TODO) explains some common thread synchronization primitives, accompanied by Python examples.
- Part 4 (TODO) explains some common process synchronization primitives (inter-process communication mechanisms), accompanied by Python examples.
- Part 5 (TODO) tackles parallel algorithm design and performance evaluation.
I’m writing this for my frustrated past self, who couldn’t wrap her head around these concepts. Moreover, my future self will likely benefit as well (I’m inferring this by extrapolating my current self’s goldfish-grade memory). And last but not least, I’m also writing this post for anybody out there still struggling.
Tasks, processes and threads
First things first: let’s establish some terminology.
Not everybody agrees on the definition of a task, but this term is so ubiquitously used that it is worth mentioning. A task refers to a set of instructions that are executed. For the purpose of this post, tasks can be seen as roughly equivalent to functions of a computer program.
A thread is the smallest set of instructions that can be managed by a scheduler. At the operating system (OS) level, a scheduler assigns resources (e.g. CPUs) to perform tasks.
A process is an instance of a running program. A process has at least one thread. However, programs can spawn multiple processes (e.g. a webserver may have a master process and several worker processes). To complicate things further, it is also possible to launch multiple instances of the same program (e.g. your favorite text editor).
I will attempt to give a more intuitive understanding of these terms with the following example:
In this example, we have a program that launches three processes. Processes 1, 2 and 3 have 2, 1, and 3 threads, respectively. Each thread runs a given task A through F.
There is an important distinction to consider: the threads of a given process all share that process’s address space, but come with their own stacks and registers. In other words:
- A thread has its own stack and registers
- A thread has access to the code, data and resources of its owner process
Therefore, in terms of resource sharing:
- Threads (of a given process) share the same resources. Care must be taken as to how threads access those resources. Thread synchronization primitives such as condition variables, semaphores, mutexes or barriers make it possible to control the way in which threads access shared resources; they will be discussed in a future post.
- Processes do not share the same resources, by default. Process synchronization, also known as Inter-Process Communication (IPC), must be used if resource sharing is necessary. Some of the most common mechanisms for achieving IPC are signals, sockets, message queues, pipes and shared memory. IPC will be discussed in a future post.
It is worthwhile to note that thread creation is lightweight in comparison to spawning a new process. This is an added benefit to the fact that threads have access to shared resources in the address space of their owner process.
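In Python, the standard library exposes both concepts: `threading.Thread` for threads within the current process and `multiprocessing.Process` for separate processes. A minimal sketch (the task and names are invented) that also demonstrates the resource-sharing distinction:

```python
import os
import threading
import multiprocessing

results = []  # shared by all threads of this process

def record(name):
    # Threads share `results`; a separate process gets its own copy.
    results.append((name, os.getpid()))

# Two threads run inside this process and both see `results`.
threads = [threading.Thread(target=record, args=(n,)) for n in "AB"]
for t in threads:
    t.start()
for t in threads:
    t.join()

if __name__ == "__main__":
    # The child process runs `record` against its *own* copy of `results`,
    # so nothing shows up in ours: processes don't share memory by default.
    p = multiprocessing.Process(target=record, args=("C",))
    p.start()
    p.join()
    print(sorted(n for n, _ in results))  # → ['A', 'B']
```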
How are tasks executed with respect to one another? How are processes executed with respect to the CPUs? Here’s where the next part comes in, where we discuss concurrency vs parallelism.
Concurrency and parallelism
As Rob Pike famously put it: "Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once." Not the same, but related.
He also goes on to say:
Concurrency is about structure, parallelism is about execution. Concurrency provides a way to structure a solution to solve a problem that may (but not necessarily) be parallelizable.
In operating system terms, the "things" Rob Pike refers to are tasks. Concurrency simply means that the operating system will schedule the tasks to run in an interleaved fashion, thus creating the illusion of them being executed at the same time. Parallelism, on the other hand, means that tasks are run on actually distinct CPUs.
Make sure to also check out Jakob Jenkov’s post on concurrency vs parallelism to get a broader picture (and to see how he defines parallelism in a stricter sense than what I have conveyed here).
What is concurrency used for?
Concurrency is useful for two types of problems: I/O-bound and CPU-bound.
I/O-bound problems are dominated by long input/output wait times. The resources involved may be files on a hard drive, peripheral devices, network requests, you name it. Most of the elapsed time is simply spent waiting for I/O to complete. When downloading files from the internet, for instance, an important speedup can be attained if we download concurrently instead of sequentially, because the I/O wait times overlap. Therefore, concurrency (launching more threads) can improve I/O-bound problems.
For CPU-bound problems, on the other hand, the limiting factor is the CPU speed. These are generally computational problems. If such programs can be decomposed into independent tasks (with the typical example being matrix multiplication), then a significant speedup can be attained if we throw more CPUs at the problem. Therefore, parallelism (launching more processes) can improve CPU-bound problems.
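As an illustration (not from the original post), here is a minimal Python sketch of the I/O-bound case: four simulated downloads whose wait times overlap when run on a thread pool. The URLs, timings, and function names are made up for the example.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_download(url):
    time.sleep(0.2)  # simulate the I/O wait (a "red block" in the diagram)
    return f"data from {url}"

urls = [f"http://example.com/{i}" for i in range(4)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fake_download, urls))
concurrent_time = time.perf_counter() - start

# Sequentially this would take ~0.8 s; with the waits overlapped it
# finishes in roughly the time of a single download (~0.2 s).
```

The same decomposition with a `ProcessPoolExecutor` would be the parallel, CPU-bound counterpart.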
Challenges in concurrent programming
Writing a concurrent program is more difficult than writing its sequential version. There are many things to consider and account for, and oftentimes isolating failures for testing is a nightmare. Here we will discuss some of the most common challenges.
A race condition leads to inconsistent results that stem from the order in which threads or processes act on some shared state.
For example, suppose the shared state is the string "wolf". We have two threads, each prefixing the shared state with a different word: thread A prefixes the shared string with "bad" and thread B prefixes it with "big".
- If A runs before B, the shared state becomes wolf => bad wolf => big bad wolf
- If B runs before A, the shared state becomes wolf => big wolf => bad big wolf
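A hedged sketch of this example in Python: real race conditions are nondeterministic, so the snippet below forces each ordering explicitly (by joining each thread before starting the next) to re-enact the two possible outcomes. The helper name is mine.

```python
import threading

def run_with_order(order):
    """Re-enact one interleaving: prefix the shared string in the given order."""
    state = {"s": "wolf"}
    lock = threading.Lock()

    def prefix(word):
        with lock:
            state["s"] = word + " " + state["s"]

    for word in order:
        t = threading.Thread(target=prefix, args=(word,))
        t.start()
        t.join()  # forcing the order here; without it, the outcome is a race
    return state["s"]

# Thread A ("bad") before thread B ("big"), and vice versa:
a_first = run_with_order(["bad", "big"])  # "big bad wolf"
b_first = run_with_order(["big", "bad"])  # "bad big wolf"
```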
We can try to expose race conditions by inserting sleep() statements that will hopefully modify timing and execution order.
Race conditions occur because access to the shared state happens outside of synchronization mechanisms. A possible mitigation strategy is to use barriers (see the next post in the series on Thread synchronization primitives).
A deadlock occurs when several tasks are blocked indefinitely while holding a shared resource and while waiting for another one.
Deadlocks occur when the Coffman conditions below are satisfied simultaneously:
- Mutual exclusion: at least one shared resource is held in a non-shareable mode.
- Hold and wait: a task that holds a resource is requesting another resource which is held by another task.
- No preemption: a resource can be released only voluntarily by the task holding it.
- Circular wait: each task is waiting for a resource that is held by another task, for all tasks involved up to the last one which is, in turn, waiting for a resource held by the first task.
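To make the circular-wait condition concrete, here is a minimal Python sketch (the names are my own, not from the post): two tasks acquire the same pair of locks in opposite argument order, which would satisfy all four Coffman conditions; imposing one global acquisition order breaks the cycle.

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def transfer(first, second, work):
    """Acquire both locks, but always in a fixed global order (here: by id).

    Without the sort, one task taking (A then B) while another takes
    (B then A) can deadlock: each holds one lock and waits for the other.
    """
    low, high = sorted((first, second), key=id)
    with low:
        with high:
            return work()

results = []
t1 = threading.Thread(target=lambda: results.append(transfer(lock_a, lock_b, lambda: "t1")))
t2 = threading.Thread(target=lambda: results.append(transfer(lock_b, lock_a, lambda: "t2")))
t1.start(); t2.start()
t1.join(timeout=2); t2.join(timeout=2)  # both complete; no circular wait
```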
A livelock is similar to a deadlock: it involves tasks that need at least two resources each, however none of them is blocked. Unlike a deadlock, tasks in a livelock are overly polite: they acquire a resource, they test whether another resource is available, they release the first resource if the second one is not available, wait for a given amount of time, then repeat the whole process all over again. If bad timing is involved, none of the tasks involved in a livelock can ever progress. The irony is that livelocks often occur while attempting to correct for deadlocks…
Resource starvation occurs when a task never acquires a resource it needs. It can usually be resolved by improving the scheduling algorithm so that tasks that have been waiting for a long time get assigned a higher priority.
Priority inversion occurs when a task with low priority holds a resource required by a task with high priority. This results in the low-priority task finishing before the high-priority task. It can also get more subtle than this, involving a task with medium priority that preempts the low priority task, thus indirectly blocking the high-priority task indefinitely. Several protocols can be used to avoid priority inversion, one of them being priority inheritance. This is how the Mars Pathfinder priority inversion bug from 1997 was fixed.
This post takes a bird’s eye view of concurrency by:
- Establishing some necessary terminology (task, thread, process, concurrency, parallelism)
- Taking a look at two classes of problems (I/O-bound and CPU-bound) and how they relate to concurrency
- Explaining some common pitfalls in concurrent programming (race conditions, deadlock, livelocks, starvation and priority inversion)
The next posts in this series will illustrate synchronization primitives (for threads and processes), list principles to keep in mind when designing concurrent programs, and show how to evaluate parallel implementations.
How can I be sure that a multi-bit-per-symbol encoding scheme exists?
(I came up with this question while trying to understand bit rate and baud rate.)
Suppose I have some data to transfer. And the data is binary encoded as a data amount of N bits.
If I use 2 symbols to represent the binary data, which means one symbol for 0 and another symbol for 1, then I can only transfer 1 bit of data at a time. And the effective bit rate is the same as the baud rate.
If I use more than 2 symbols to represent the binary data, and each symbol represents multiple bits, then I can transfer the same N bits of effective data much faster. And the symbol change rate on the line (baud rate) is lower than the effective bit rate.
But how can I know such a multiple-bits-per-symbol encoding scheme exists for a given data trunk?
ADD 1
I once had some difficulty understanding the baud rate and bit rate. I think the difficulty comes from the first impression I got from pictures similar to the one below:
The pic gives me the impression that what gets physically transferred over the wire is the digits 0 and 1. And for each digit, a different voltage level is assigned. So there are always only 2 different signal/symbol/voltage types on the wire. And thus the bit rate is always the same as the baud rate.
Now I think this pic just shows the effective result. It is symbols that actually get transferred over the wire. The number of possible symbols is determined by the physical nature of the channel/medium. When a symbol is transferred, the one or more bits it carries get transferred effectively. And how symbols represent bits is a mathematical agreement among the communicating parties.
What type of data trunk? A very popular method is QAM, which is defined to have more than one bit per symbol.
What do you mean by "a given data trunk"? What would you consider an example of a "data trunk"?
@ThePhoton I mean an arbitrary piece of data, which is a stream of bit 0/1.
There are three elements of a carrier that you can modulate: the amplitude, the phase and the frequency.
A very popular digital modulation scheme uses one of four possible phases (QPSK). So it can convey two bits on each symbol.
Other often-used digital modulation schemes use combinations of several amplitudes and phases. For example, 16QAM can send one of 16 possible combinations of phase and amplitude. So each 16QAM symbol can convey four bits.
There are other digital modulation variations similar to those mentioned, like 8PSK or 64QAM, 256QAM, etc.
To be able to decode a multi-bit symbol you need rather complex receivers. So those multi-bit-per-symbol protocols need mechanisms for data synchronization; you have to analyze the path to know whether the SNR is high enough to differentiate each symbol, and so on.
This is it really in a nutshell, I hope that it is clear as an introduction.
Thanks, I am learning EE and your answer is a good introduction.
You are welcome! Best of luck on your studies. I also learned EE a while ago, and real-life examples often help us grasp new concepts. Don't hesitate to ask again and cheers, you have chosen a great career!
The number of bits per symbol does not need to be an integer, although this is most convenient (simplest to implement).
In the most general case, you treat your entire message as one big binary (base 2) number. If your channel has N states (symbols), you simply convert that number to base N and transmit it one digit at a time.
In a more practical implementation, you would break the message into fixed-length blocks, converting and transmitting one block at a time, possibly adding additional error detection and correction bits to each block.
So the answer to your fundamental question is that there is always a way to transmit digital data using an arbitrary number of symbols.
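A rough sketch of that base-conversion idea in Python (ignoring blocks and error-correction bits; the helper names are mine): treat the whole bit string as one base-2 number and re-express it as base-N digits, one digit per symbol.

```python
def to_symbols(bits: str, n: int) -> list[int]:
    """Treat the bit string as one big base-2 number; return its base-n digits."""
    value = int(bits, 2)
    digits = []
    while True:
        value, d = divmod(value, n)
        digits.append(d)
        if value == 0:
            break
    return digits[::-1]

def from_symbols(digits, n: int, bit_length: int) -> str:
    """Rebuild the original bit string; bit_length restores leading zeros."""
    value = 0
    for d in digits:
        value = value * n + d
    return format(value, f"0{bit_length}b")

# Six bits sent over a 4-symbol channel take only three symbols:
symbols = to_symbols("110101", 4)          # 0b110101 = 53 -> [3, 1, 1] in base 4
recovered = from_symbols(symbols, 4, 6)    # "110101"
```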
So, if I don't consider additional overhead bits, on a binary channel (I mean a channel with only 2 states/symbols), the baud rate is always the same as bit rate. Right?
Yes. You already stated that in the question.
The other answers probably answered your title request. But as a student and young engineer I struggled with symbol-vs-bit concepts.
If I use more than 2 symbols to represent the binary data, and each symbol represents multiple bits, then I can transfer the same N bits of effective data much faster.
Here is some insight that took me a long time to grasp well.
Listen to Claude Shannon himself at 5:12.
The fundamental answer is NOISE. The universe vibrates, and there is a "base noise level" that we simply cannot avoid in any electronic system.
Noise is unwanted energy from natural (and man-made) sources. Every resistor and every active component in your circuit(s) contribute to this unwanted energy in your communication channel.
Every symbol in your encoding scheme has a specific signal energy (measured in joules), related to both the power and duration of the encoded symbol, that competes with the unwanted energy (noise) in the same time slot.
If you encode only one bit per symbol, all the energy in that symbol represents that single bit. But, if you encode N states (log2(N) bits) onto each symbol, each bit effectively gets only a portion of the energy of the symbol.
On the other hand, the noise energy of each bit in a symbol does not divide.
This is the key point to grasp. One way to look at it is that all the noise energy in a symbol does battle with every bit that is encoded onto that symbol. Think about this carefully - bit energy divides, noise energy does not.
So, as you encode more bits onto each symbol, you effectively lower the ratio of energy-per-bit/noise-energy-per-bit.
Ultimately, due only to the presence of noise energy, each bandwidth-limited communication channel has a theoretical upper bit rate limit that is solely a function of the unwanted energy (noise and interference) in that channel.
To paraphrase: If it was not for noise, we would enjoy unlimited data rates on every single communication channel.
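The theoretical upper limit mentioned above is the Shannon-Hartley capacity, C = B * log2(1 + S/N). A quick check in Python (the 3 kHz / 30 dB figures are just an illustration, not from the answer):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: maximum error-free bit rate of a noisy channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz telephone-grade channel with 30 dB SNR (linear SNR = 1000):
c = shannon_capacity(3000, 1000)  # about 29.9 kbit/s, regardless of modulation
```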
From the above, one might intuitively feel that it's always better to choose one symbol per bit, because then all the signal energy in that symbol can be assigned to battle the noise energy in that symbol.
This is not the case...
In fact, the opposite is true - simply because, by encoding more bits onto each symbol you effectively allow the symbol duration to be longer, and therefore the energy-per-bit decays slower than the noise-energy-per-bit until the limit is reached. This goes back to the fundamental insight that the signal energy in each symbol does battle with the noise energy in that symbol.
Consequently, modern encoding schemes encode multiple bits onto each symbol, resulting in an effective symbol duration that is much longer than a single bit duration.
The downside of more bits-per-symbol is the additional processing power and complexity required for both encoding and decoding of the bits.
The benefit of more complex encoding is the amazing high speed internet channels we daily use and enjoy at work, in our homes and on our phones.
Also don't forget GPS and deep space and Viterbi!
What you need to understand is that you can't just put "0"s and "1"s in the line. You have to encode it somehow and that's called modulation, which is part of the physical layer of any protocol.
So, you have a copper wire, or an optical fiber, or even an electromagnetic field, and you have to somehow transmit bits to the other side. There are many ways to do that, but the basics apply: you usually have an actual physical quantity that can be measured on the other side, respectively for our cases: voltage level (or current), brightness (for each light wavelength) and electromagnetic power.
On the transmitter side, you have to "translate" bits to those physical quantities. Note, however, that the ones I mentioned are continuous quantities: you can "put" 0, 0.5, 1, 5, 20 volts between a pair of wires. The receiver will see those quantities at the other end of the wire pair (plus losses, interference, noise...).
Anyway, think like this: if those quantities are continuous, I can divide them into more discrete states. Then, if 0 volts means the 0 bit and 1 volt means the 1 bit, I can get 0 volts to mean the bits 00, then 0.33 volts to mean the bits 01, then 0.67 volts to mean the bits 10 and 1 volt to mean bits 11. This way a single symbol, which is a single voltage measurement, can mean multiple bits. If you transmit 1 voltage level every 1/1000 of a second, you are transmitting 1000 symbols/s (baud rate) and 2000 bits/s (bit rate). If you want, you can keep dividing further, up to the point where your receiver will be confused by the noise and demodulate your bits with errors (Shannon limit).
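That 4-level scheme can be sketched in a few lines of Python (the voltage values are the illustrative ones from the answer):

```python
import math

# Each of the 4 voltage levels carries 2 bits.
LEVELS = {"00": 0.0, "01": 0.33, "10": 0.67, "11": 1.0}

def modulate(bits: str) -> list[float]:
    """Map a bit string (even length) to a sequence of voltage symbols."""
    assert len(bits) % 2 == 0
    return [LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

baud = 1000                               # symbols per second
bit_rate = baud * math.log2(len(LEVELS))  # 2000 bits per second
volts = modulate("0110")                  # two symbols carry four bits
```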
The image above, for example, has a carrier and is called Amplitude Shift Keying (ASK) and is the digital equivalent of AM (like the AM radio), but there are many others like FSK, PSK, QAM, PWM, and many others.
But how can I know such a multiple-bits-per-symbol encoding schema exists for a given data trunk?
By reading the protocol specification. This really should be obvious.
Note that you need to know a lot more about a protocol to actually communicate than just the effective bit rate. There are issues of encoding, knowing where words, packets, etc. start and end, how the bits are encoded, packet wrapping, and many more. All of these should be spelled out in a protocol specification somewhere.
If you want to grow your mailing list as fast as possible, you are going to need to learn how to split test your optin offers. Thankfully, with Aweber, it's really easy to do!
A Split Testing Case Study
As my regular readers will know, I have a large number of niche websites in the digital camera niche. One of the reasons that I have so many sites in one niche (called “grouping”) is that I wanted to be able to build a mailing list to generate additional passive income, however, I didn’t want to have to create a mailing list for each and every site.
By having a group of similar sites, it really wasn’t very hard to design my mailing list to be usable for all the sites. By doing this, I was also able to use a single optin form (more on that in a sec) for all the sites.
Being able to have a single optin form, just one list, and one set of auto-responders saved me a TON of time when it came to getting everything all set up and working.
Split Testing for Faster Growth
After I had my list growing for a while, I wanted to see if I could make it grow faster. To do that, I started split testing two different types of forms (back in Feb).
As you can see, in both situations, there was a marked difference in the S/D (ratio of optins per number of displays).
For example, on the lightbox, the better performing form got an additional 9 subscribers. On the sidebar form, which was displayed over twice as many times as the lightbox form (the lightbox is a fade in form that only shows once per visitor, whereas the sidebar form is shown continuously throughout their visit), the extra performance got me an additional 63 subscribers!
Using Aweber for Split Testing
As you can see from the image below, setting up split testing with Aweber is pretty darn easy. All you need is to have created two forms, then click the big green button and the wizard will walk you through the rest.
Once you have the split test set up, all you need to do is paste the code into a text box in the widgets area of the WP admin panel and you are done. The only additional thing I'd suggest is to mark a task in your calendar to check the split test results at some point in the future.
Improving Your Split Testing Results
For me, I let the test run for a few months and then went back and changed the forms that were performing poorly. You might want to change the graphics on the form, the colors, text, the size, etc…but only change one thing at a time or you won’t know which change was helping you!
PS. Have a comment on this post? Please share it below. I read them all!
Separate names with a comma.
Discussion in 'TiVo Home Media Features & TiVoToGo' started by Yoav, Dec 10, 2008.
Okay, what is in 1.2b?
as the comments say, I've been regularly taking the latest svn of streambaby and pytivo, ffmpeg, x264, and lame. Since 1.1 is now released, the beta number just got bumped to 1.2b1. It's just a regular build with the latest code.
edit: oh I lied a little. The latest 1.2b code incorporates some new code that should now 'do the right thing' when you upgrade versions AND have 'launch at login' set.
I assumed... I was just being silly.
I had been running 1.1 and found the (something like) "install and restart" message after it had downloaded this build. All sorts of "wrong things" happened: it kept crashing on (pyTivoX) restart, and after manually downloading 1.2b1 and installing it, streaming wouldn't work, I couldn't empty the trash because of all the .jar files still active, etc.
So... I guess this 1.2b1 code will fix that, but thought I'd mention it Just In Case.
I forgot to mention how grateful I am for this delightful, elegant hack. I'm a late adopter of TiVo, just got here last summer, and just found this forum last week. We don't watch that much TV, but making that which we do watch more convenient, nearly fun, is very much appreciated.
So... thanks very much!
Ermm, actually, 1.2b1 shouldn't really fix any of that. It sounds like streambaby is still running instead of being reaped during the upgrade. The easiest thing to do is probably reboot, which will clean up everything, including errant processes. I haven't seen this happen before, and you're the first to report it, so I'm hoping 'something interesting' went on that caused it and this is hopefully a unique thing.. But if it's still going on, I'm gonna ask for your help debugging it
This may have been mentioned before, but I can't find the answer. Is there anyway using pyTivoX to transfer files and keep the sub-folders they are in? I want to transfer my home movies to my Tivo, but I want them all to show up in a "Home Movies" folder. When I use the Tivo to transfer the files, they all show up in the Now Playing list. I know that using the Tivo Desktop the only way to do it is to setup Auto-Transfers. Any way to do it with pyTivoX?
That's a function of pyTiVo generally, not pyTiVoX specifically. You need to create a metadata file that includes a valid seriesID to get things in folders. There is information here about the metadata file. Here's a thread with some discussion about that issue from the pyTiVo forum.
In the update window, what does "Automatically download and install updates in the future" mean? I ask, since I have yet to see it actually do this, so figure I must be misunderstanding
Well, this is sparkle, so it does whatever sparkle claims it does
It generally only checks for new versions about once a day. I believe setting it to auto-download and install will make it just download it when a new version is available instead of prompting you -- but I havent tried it).
Just got MAK ability enabled on my Australian TiVo (don't ask about the price - we have no subscription but high upfront cost).
pyTiVoX seems to be working fine at first, but after a few minutes of transfer/playing (ie pytivo mode) or streaming (streambaby mode) my TiVo just restarts itself - all the way to the starting up graphic (then the "it'll only be a few more minutes").
Sounds like an issue with the tivo software. Are you guys running the same release as we are in the US?
No, they aren't. I've had a few Australians show up on my Reversi game, and they're running "11.1" rather than 11.0b. Which should not be read as them being ahead -- I think their version is crippled, and not (yet?) allowing TTCB. Although I'm surprised to hear that it does work for a few minutes. I dunno, they have some strange policies.
Is TTCB what TiVo Desktop uses to re-encode shows to send to TiVo for playback? If this is the case then yes this just got enabled in our Home Networking Pack (enables TTG, Music, Photos etc).
As for policies how it is sold here is different:
Upfront one-off cost - no TiVo subscription, but this is for approved apps (weather, games etc), EPG & broadband movie service.
MAK was turned off (so no external apps, multi room etc). This has been enabled now for a fee for your account (ie one off payment to enable, does so for all TiVo's on your TiVo account).
Whilst it's strange, the good thing is there are no ongoing subscription costs.
TTCB is short for Tivo To Come Back which just means transferring shows from computer back to Tivo.
i.e. The opposite of TTG = Tivo To Go which means transferring shows from Tivo to computer.
Until fairly recently one could only transfer mpeg2 program streams back to Tivo, so any other kind of video needed to be transcoded to mpeg2 (which can be done on the fly) before being transferred. Series 3 Tivos (at least the ones sold in the USA) now do have the ability to natively store videos in a limited number of other formats as well. This Wiki page summarizes what Series 3 Tivos can natively decode:
(The most useful other native format other than mpeg2 is mpeg4 container with H.264 video and AAC or AC3 audio)
So, scrytch, do you get the same results with TiVo Desktop?
I just started using pytivox (thanks yoav for the gui and the developers for the underlying code) and was wondering about the ability to "natively store" other-format videos on my TivoHD.
Does that mean I can just upload a mp4 file encoded in h264 with AC3 audio? I haven't found a way to do that... Are we still talking streaming? Just a little confused.
It's probably not well documented or widely known at this point but it was discovered that via Tivo Desktop Plus auto pushes to series 3 Tivos of some mp4 files were not being transcoded to mpeg2 and that eventually led to confirmation and integration of that capability into pyTivo. See this thread for the whole sequence of events:
Summary of how to setup for mp4 pushes to your Tivo(s):
* Install wmcbrine's pyTivo fork (Just grab the latest zip file and unpack it somewhere)
* Fire it up (double-click on pyTivo.py) and then with a browser connect to http://localhost:9032
* In Web Configuration section under Global Server Settings set tivo_username and tivo_password to what you use for logging into Tivo web page
* Add a new videos section where your mp4 videos (H.264 + AAC or AC3 audio) reside and save changes
* Stop pyTivo and start it again (may not be necessary but just to be sure)
* Connect again to http://localhost:9032 and click on your video shares name
* Now select an mp4 video and Tivo to push to and click on the appropriate Send To Tivo button
* Leave pyTivo running and wait a few seconds (maybe minutes) and you will notice some pyTivo console activity and a blue light on the Tivo you are sending to light up and the transfer begins. You will also note no transcoding happens if you pick a compatible mp4 video.
(The details may not be 100% precise and I may have forgotten something but I think that should be enough to get you going). It would probably be good to have a detailed and accurate step by step reference page for this saved somewhere for easy reference.
moyekj answered this, but in the interest of saving you some work:
pyTivoX ships with wmcbrine's pyTivo. You just need to provide extra information to the config to enable push. The web interface is enabled.
However, if you ever hit the 'apply' button on the gui, it will lose all the configurations you made via the web gui. So, you should probably do something like 'run pyTivoX, set up all your shares, decide if you want 'launch at login', and hit 'apply'. Then do the web configuration to enable push. From that point on you should never need to hit the apply button.
Thanks Yoav. I thought I remembered you used a different version of pytivo, but in all my "catchup" reading I guess you switched... Good. GUI's and I are better speaking terms.
So, setup the shares, streambaby checkbox etc... hit apply THEN do the web config....
You probably addressed it elsewhere, but any possibility of adding .mp4 functionality to the pytivox interface? KISS - I know, and the mp4 support seems like a very recent addition for Tivo, but it sure would be nice to archive HD content in H264 via iTivo and handbrake (using a relatively HIGH quality with ac3 support AND comskip) and have the ability to stream that back to the Tivo (or transfer) at a much quicker speed than the native mpeg-2.
XML::DOM::DOMImplementation − Information about XML::DOM implementation
The DOMImplementation interface provides a number of methods for performing operations that are independent of any particular instance of the document object model.
The DOM Level 1 does not specify a way of creating a document instance, and hence document creation is an operation specific to an implementation. Future Levels of the DOM specification are expected to provide methods for creating documents directly.
hasFeature (feature, version)
Returns 1 if and only if feature equals "XML" and version equals "1.0".
namespace UniversitySystem.Web.Infrastructure.Extensions
{
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Extensions.DependencyInjection;
using Helpers;
public static class ServiceCollectionExtensions
{
/// <summary>
/// Registers all non-generic types that are located in assemblies whose name contains assembliesContainName and that implement the interface TDependency
/// </summary>
/// <typeparam name="TDependency">Interface that the registered types must implement</typeparam>
/// <param name="services">Service collection</param>
/// <param name="assembliesContainName">Substring that the assembly names must contain</param>
public static void RegisterAllNonGenericDependenciesWhichImplement<TDependency>(this IServiceCollection services, string assembliesContainName)
{
var types = TypeHelpers.GetAllTypeForAllUsedAssemblyContainName(assembliesContainName)
.Where(t => t.IsClass && t.GetInterfaces().Contains(typeof(TDependency)));
foreach (var type in types.Where(t => !t.IsGenericType))
{
foreach (var serviceType in type.GetInterfaces().Where(x => x.GetInterfaces().Any(t => t == typeof(TDependency))))
{
services.AddTransient(serviceType, type);
}
}
}
/// <summary>
/// Register all generic dependencies. Workaround for .NET core issue https://github.com/aspnet/Home/issues/2341
/// </summary>
/// <param name="services">Service collection</param>
/// <param name="genericTypes">Dictionary mapping (serviceType, implementedType) pairs to the array of type arguments each open generic should be closed over</param>
public static void RegisterGenericDependencies(this IServiceCollection services, IDictionary<KeyValuePair<Type, Type>, Type[]> genericTypes)
{
if (genericTypes == null)
{
return;
}
foreach (var types in genericTypes)
{
var serviceType = types.Key.Key;
var implementedType = types.Key.Value;
var possibleGenericTypes = types.Value;
if (possibleGenericTypes == null)
{
continue;
}
foreach (var possibleGenericType in possibleGenericTypes)
{
services.AddTransient(serviceType.MakeGenericType(possibleGenericType), implementedType.MakeGenericType(possibleGenericType));
}
}
}
}
}
LOOK.education is our flagship technology product at Nehloo Interactive. We're on an accelerated path to the top, which is why we welcome you to apply to join us if you love emerging technologies and visual learning and education. We've developed a software platform core with virtually infinite applications in education and training. From students, apprentices, and trainees, to company employees, organization members, government staff, and even further to families, celebrities, astronauts (yes, we're a NASA semifinalist) and so on, LOOK creates visual excitement and enhances teaching and learning for a wide variety of people and organizations.
You should have the expertise and passion to exercise and grow your skills and abilities, to perform the following job duties:
● Architect, design, and build outstanding mobile web applications that include the latest technologies such as embedded media, VR, AR, and responsive UI/UX.
● Create and maintain algorithms or scripts to support engineering optimization of 3D objects and scenery (AR), or photorealistic spherical ocular viewed videos (VR), for web browsers or mobile devices.
● Produce interactive AR and VR experiences viewable on mobile devices or web browsers (iOS-based, Android-based, Microsoft-based etc.)
● Effectively debug, troubleshoot and optimize code. Find and fix security issues.
● Cross-browser/platform testing to ensure code functions properly on all devices.
● Create and maintain internal tools, frameworks and dashboards that drive company
● Implement effective digital analytics instrumentation strategy to capture web and mobile customer clickstream data.
● Collaborate across organization boundaries to develop software solutions that solve important customers' problems.
● Keep up-to-date on new technology, standards, protocols and tools in areas relevant to rapid changing digital environment.
● Ideally, maintain, modify and create native applications and code for iOS, Android etc.
● Apply layers of interactivity to UI elements, to engage or evaluate diverse types of users with various abilities and/or disabilities.
To successfully operate in this role, you must have the following qualifications:
● Minimum High School diploma (required), or relevant BS/Associate degree a plus.
● 2+ years of experience in mobile web development (will accept outstanding high school or college project work).
● High sense of urgency, motivation and positivity, strong work ethic and punctuality.
● Knowledge of WebGL, WebVR.
● Experience with software development processes including source control, bug tracking, and design documentation.
● Actively contribute to brainstorms and conceptual meetings.
● Effectively manage work schedule and deadlines across concurrent projects.
● Strong communication and collaboration skills.
● Clearly articulate work progress and required assistance to get things done.
● Ideally, experience with effective use of technology for educational purposes, web-based VR/AR embedding solutions, and gaming and gamifying technologies.
● Ideally, experience with digital physics, optics, computer vision, robotics, machine learning (ML), artificial intelligence (AI).
● Ideally, expert with AWS instances management and Linux.
● Motivated in advancing a career in emerging mobile web technologies and VR/AR.
You will experience:
● Exciting projects and work.
● Workplace flexibility.
● A cool team to grow with.
This is not a complete listing of the job duties. This role at parent company Nehloo Interactive creates exciting assignments and it is a great opportunity to work at a technology startup focused on next-generation education and training, engaging in small and large projects for various industries.
Nehloo Interactive is committed to creating a diverse environment and being an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, veteran status, or any other protected status with respect to recruitment, hiring, promotion and other terms and conditions of employment.
Interested candidates should apply to email@example.com for consideration.
Posted On January 22, 2019
|
OPCFW_CODE
|
I've been trying to create a personnel database in an object, with an option to retrieve 3 random persons/values. I am quite content with repeating the same code three times, since I most probably won't need to pull more than 3 values at once, but I just can't get over the fact that I have no idea how to make it work in a loop, the way I want it to.
This is as far as I can get:
<<set $Database = ["Person1", "Person2", "Person3", "Person4"]>>
<<for $i to 0; $i lte 3; $i++>>
<<set $Random = random(0, 3)>>
<<set $RetrievedPerson1 = $Database[$Random]>> <-- Here is where my problem starts. I have no idea how to format the "RetrievedPerson" name so that the loop increases the number with each iteration. I am guessing it can be done easily, but I couldn't find an answer. I don't even know how to properly formulate the question.
I would appreciate any suggestions.
It also would be good to know how to put an object within an object in Twine and how to retrieve those values, but I guess I can figure out a way around that.
Sorry for my english.
So, is this SugarCube 2 or SugarCube 1? Please say SugarCube 2.
I assume what you're trying to articulate is that you want to programmatically create $variable names, which can be done, though it requires slightly deeper knowledge of the inner workings of SugarCube. It would be simpler, though somewhat more cumbersome, to simply switch on the value of your loop index ($i) via the <<if>> macro.
That said, if you're only going to be pulling small numbers of items, why do you need a loop in the first place? You can simply pull the requisite number of members; no need for a loop.
No-Loop Examples
So, to retrieve three members from $Database, and if duplicates (i.e. the same random value multiple times; which would also be an issue using a loop) are okay, you could simply do something like the following:
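The original example was stripped from this post; a likely shape, using SugarCube 2's <Array>.random() extension (which returns a random member without removing it), would be:

```
<<set $Database = ["Person1", "Person2", "Person3", "Person4"]>>
<<set $RetrievedPerson1 = $Database.random()>>
<<set $RetrievedPerson2 = $Database.random()>>
<<set $RetrievedPerson3 = $Database.random()>>
```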
If duplicates are not okay, you want unique pulls each time, then the simplest thing you can do is to remove each member as it's chosen. You can achieve this with the <Array>.pluck() method.
For example, if you are setting up $Database every time (i.e. it's okay to remove members from it), then you can do the following:
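The stripped example likely looked something like this; <Array>.pluck() removes and returns a random member, so each pull is unique:

```
<<set $Database = ["Person1", "Person2", "Person3", "Person4"]>>
<<set $RetrievedPerson1 = $Database.pluck()>>
<<set $RetrievedPerson2 = $Database.pluck()>>
<<set $RetrievedPerson3 = $Database.pluck()>>
```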
On the other hand, if $Database is something you setup once (i.e. it's not okay to remove members from it), then you can make a copy and pluck members from that:
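A sketch of the copy-and-pluck variant, using SugarCube's clone() and a temporary variable so the original $Database is left untouched:

```
<<set _copy to clone($Database)>>
<<set $RetrievedPerson1 = _copy.pluck()>>
<<set $RetrievedPerson2 = _copy.pluck()>>
<<set $RetrievedPerson3 = _copy.pluck()>>
```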
Now that I've covered how you should probably do it, here's how you could use a loop to do by programmatically accessing the $variable names: Again, if $Database should not be modified, then pluck from a copy, like so:
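A sketch of the loop version: SugarCube 2 exposes the story-variable store as State.variables, so $RetrievedPerson1 through $RetrievedPerson3 can be addressed by a computed name:

```
<<set $Database = ["Person1", "Person2", "Person3", "Person4"]>>
<<for _i to 1; _i lte 3; _i++>>
    <<set State.variables["RetrievedPerson" + _i] to $Database.pluck()>>
<</for>>
```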
You used "object" before to refer to an Array (which is technically true, Arrays are objects), but most of the time when you say "object" we're going to assume you mean Objects (generic objects). So, are you talking about Arrays or Objects?
Here are some examples (no commentary though):
Thanks for this very useful answer.
However, I'm having trouble with either datastructure, or syntax after using the insight I got from your post.
I have a $map variable with locations nested in it, each location being an object containing stuff. I could not set-up $map as an array containing objects, because Twine then says there's an error ("Error: <<set>>: bad evaluation: Unexpected token").
I changed $map to an object, and then it works.
However, I'm trying to build a loop to print all this, location by location, which seems impossible to do with the object structure, unless there's a way to reference each Location object in the Map object by an index value (like an array has), which i'm not sure there is for an object.
Here is my map thingy set up:
And the loop I originally set up when I thought I could nest objects inside arrays:
Is there any solution to this issue (either a way to reference an object's index if there is one, or nesting objects inside an array) ?
For what you're doing, it probably makes more sense to use an object containing objects. A loop to print all of the locations using such a structure could look something like the following:
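A sketch of such a loop; it walks the location names via Object.keys(), and the .name property is purely hypothetical here (the real map set-up was not preserved in this thread):

```
<<set _keys to Object.keys($map)>>
<<for _i to 0; _i lt _keys.length; _i++>>
    <<set _loc to $map[_keys[_i]]>>
    <<print _keys[_i]>>: <<print _loc.name>>
<</for>>
```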
Assuming that this is for debugging or something, here's a somewhat more nicely formatted version using tables:
This may very well be why it didn't work?
Anyway, thanks a lot for your code; it works very well and now I just have to understand it ;p
So I have some questions about it:
What does the function Object.keys($variable) do exactly?
What do the pipes | do?
What do |h and |! do?
What do the \ do ?
Put simply, it returns an array of the given object's own property names (a.k.a. keys). Objects consist of key/value pairs. In the case of Object.keys($map), it returns an array of the names of each object you've stored in $map (e.g. tavern and taverncellar).
As I noted with the example, they're part of the table markup. Specifically, the TiddlyWiki table markup (I could have used HTML table markup instead).
SugarCube's markup language (docs: for SugarCube 2.x or for SugarCube 1.x) is based on TiddlyWiki markup, though there are some differences.
It's the line continuation character (see: Line Continuations, for SugarCube 2.x or for SugarCube 1.x).
|
OPCFW_CODE
|
If this isn’t the one you are looking for, try the next
This is a rather basic searching algorithm, but it is actually useful in practice when you just need to find an element in an array — and you suspect the element is towards the beginning of the array (or just don't want to think too hard).
Difficulty: Beginner | Easy | Normal | Challenging
Iteration: Each time a set of instructions is executed in a loop is one iteration.
Linear Search: An algorithm that looks for a target within an array, starting with the first item and moving on to the next one in turn until the element is found.
Why do we need to search
To find an item on a computer, we need to think like a computer. Each time a computer checks to see if the item is the one it wants, that is a comparison.
And here is the thing: Comparisons cost time.
Now computers are inherently ordered things. This might be surprising, but essentially what I’m saying is that computers aren’t completely disorganised.
If you have a large collection of numbers (and let us use numbers as examples of searching) you would put them together in an array to find a specific entry.
Imagine you want to store the age of a set of people to do some clever mathematical modelling. Perhaps there is some reason that you want to ensure that there is someone who is exactly 40 years old.
Linear Search is a possible algorithm that can be used to solve this problem.
So imagine we have 5 numbers stored in an array, in the order that they originally were created in.
These 5 numbers are 15, 2, 104, 112 and 3 in this case. We will try to find 104 in the array.
As you can see, we have placed the numbers into the array. We will then traverse the array to try to find out whether 104 is present.
There is something going on here, though. At step 3 of the algorithm above we have found our element, yet we have continued for two more iterations of the loop.
This is quite wasteful — if we think about this we can actually just stop when we’ve found the element we are looking for.
So we can explain this Linear Search algorithm with a flowchart, with the slight efficiency improvement (stopping as soon as the item is found) already added.
This is also represented by the following pseudocode
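That pseudocode can be sketched in Python (names here are illustrative), including the early-exit improvement described above:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent.

    Stops as soon as the target is found, rather than scanning
    the rest of the array.
    """
    for index, value in enumerate(items):
        if value == target:
            return index  # found: stop immediately
    return -1  # reached the end without finding the target


numbers = [15, 2, 104, 112, 3]
print(linear_search(numbers, 104))  # found at index 2
print(linear_search(numbers, 40))   # not present -> -1
```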
Why Linear Search Isn’t Great
Linear search compares each and every element in turn. This means that you potentially need to check each and every element in your list before you find the target! Because there are n elements in the list, potentially you have to make n comparisons to find the target.
We say in the worst case there are n comparisons, so the algorithm is O(n).
Here we are measuring the efficiency of the algorithm by looking at the worst case. The worst case is that the item we are looking for is right at the end of the array.
You should know that the worst case scenario happens more often than you think.
When we look for an item in an array or other data structure, we do rather need to think about the efficiency of our algorithm. If you wanted to watch a particular movie on Netflix (say Mortdecai or Entourage), you would not want the algorithm to look through every possible film on the service before telling you whether it is there.
A divide-and-conquer algorithm like Binary Search (explained in the Swift programming language HERE) can be a much better choice for such a use.
You can argue, though, that in some cases (like a small array) going through a simple algorithm like Linear Search makes sense.
As with much in programming, it is up to the programmer to decide. The best course of action is that which you choose, so it is best to at least attempt to choose wisely.
The Twitter contact:
Any questions? You can get in touch with me here
|
OPCFW_CODE
|
Fey Evolution Merchant – Chapter 421 – Huge Axe Mercenaries
In this situation, even if Lin Yuan gave Zhou Luo a Bronze/Epic fey, Zhou Luo would still need Lin Yuan's investment if he wished to become stronger. Lin Yuan would not make such a losing deal.
Zhou Luo had always felt that he didn't have the slightest sense of presence, but he didn't expect to be singled out now by the youth with the strange mask.
As soon as Zhou Luo summoned his Iron Bone Iguana, Lin Yuan's eyes were fixed on its neck. It wasn't that he deliberately stared at it; rather, the sarcoma there was too apparent, as it was one-quarter the size of its head.
The one thing that Zhou Luo had noted in his details was that this Platinum IX/Fantasy I Iron Bone Iguana had a mutation at its neck, which would affect some of its body movements and thus its strength to a certain extent.
Just by hearing the word 'mercenaries', it was clear that the role of this rising faction was to be hired to complete missions and obtain items. It was a rising faction that relied on martial force.
What has it got to do with me whether your Huge Axe Mercenaries have any pinnacle king-class experts?
This made Lin Yuan suddenly think of a dish: potatoes in a pressure cooker.
Mercenary factions like this, which gathered king-class experts, faced very fierce competition, since they let their strength speak for them; it was thus apparent at a glance which faction had the superior business.
Zhou Luo, who was sitting on the chair, felt a little confused. Why is this strong man targeting me again?
If the strength of Zhou Luo's main fey, the Iron Bone Iguana, was seriously affected by the mutation at its neck, Zhou Luo would be a burden to Lin Yuan.
Zhou Luo clearly understood that he had never had the qualifications to be picky when he wanted to join a faction. He could only quietly wait for others to choose him.
After saying that, Lin Yuan did not spend any more energy on Freezing Cold, who was standing at the side.
Zhou Luo was still not confident about the strength of his main fey, the Iron Bone Iguana. He took a deep breath and said with some apprehension, "Do you think I meet the criteria with my strength?"
When Freezing Cold heard that, his expression stiffened, and he immediately became hesitant.
"A few days ago, my big brother became a pinnacle king-class expert. What do you mean by being a hair's breadth away from being a pinnacle king-class expert?"
Oh my God! This strong man doesn't have any improper thoughts about me, does he!?
Translator: Atlas Studios Editor: Atlas Studios
Lin Yuan wasn't worried about the weakness of Zhou Luo's main fey, which he recalled was a Platinum IX/Fantasy I defense-type Iron Bone Iguana.
Just as Zhou Luo had finished talking, he didn't expect to be immediately targeted by the tall and strong man beside him.
He said, "Summon your main fey out for me to see."
He promptly answered, "Of course, I've heard of the Huge Axe Mercenaries. It's a very powerful faction with 11 king-class experts, the most powerful of whom is said to be just a hair's breadth away from being a pinnacle king-class expert."
Lin Yuan turned his head and looked at the man, who had come in first and sat on the chair, before asking, "Zhou Luo, have you heard of the Huge Axe Mercenaries?"
Liu Jie's fists had clenched up, and he glanced at Lin Yuan. His eyes were conveying the message: 'I'll blast him out if you approve.'
Lin Yuan nodded at his words. From the number and strength of the king-class experts, the Huge Axe Mercenaries should be considered a veteran and rising faction.
It was now almost New Year's and in time for the Brilliance Federation's S Tournament season. The Huge Axe Mercenaries probably could not receive any missions now, so Freezing Cold had started to show up to do odd jobs as a helper.
|
OPCFW_CODE
|
ASP.NET Core how to change policy requirements based on user selection
In my Blazor webassembly project I can see all roles, delete and add new ones. I can also assign a role to a user and authenticate the user based on that.
Now I have for example my RolesController.cs:
// GET: api/Persons
[Authorize(Policy = "SeeAllRoles")]
[HttpGet]
public async Task<ActionResult<IEnumerable<IdentityRole>>> GetRoles()
{
return await _roleManager.Roles.ToListAsync();
}
// GET api/<RolesController>/5
[Authorize(Policy = "GetRole")]
[HttpGet("{id}")]
public async Task<ActionResult<IdentityRole>> GetRole(string id)
{
var role = await _roleManager.Roles.FirstOrDefaultAsync(r => r.Id == id);
if (role == null)
{
return NotFound();
}
return role;
}
Every controller has a unique policy for each action, so that I can restrict each action individually. Normally I would just define the policy in Startup.cs, add .RequireRole("Admin") or something to it, and then every user in that role would have access via the policy I specified.
To the question: I want to have a list of roles, as I have now, with the ability to select which policy should be included in this role. So that I can login to my application go to Roles and add a new role NewsDisplay and only select policies necessary to view the news feed. Then I can add users for my various displays and put all of them in the newly created NewsDisplay role.
I guess this is possible, but I can't find a sufficient solution. It would be nice if someone could give me examples, links, or ideas.
Are you using ASP.NET Core Identity? It has everything you need: roles as an entity, and claims. A role is basically a bag of claims, the same as a user can be a bag of claims. So you create a role named "Admin" and add claims to it which define an admin; for the claims you have policies in your startup. This should be well documented already. The claims will be automatically added to the user's IdentityClaims if he has a specific role. https://learn.microsoft.com/en-us/aspnet/core/security/authorization/claims?view=aspnetcore-3.1
I want to add policies to roles so that I don't need to give each user 100 policies; instead I add the 100 policies to a role and give each user just that role.
Maybe that is possible with the claims you mentioned, but currently I am not that familiar with claims, so it would be helpful if you could give me some examples other than the general Microsoft one.
Wait, roles also have claims. So you mean I can add my role, then when I select a policy in my UI which this role should have, I add the claim for the policy to the role, and then in my Startup.cs I check for that claim?
Yea, roles are "bags of claims" too, i.e. you could have claims roles and roles.manage. The roles.manage claim could be added to the role named "admin" or "manager", whereas the roles claim (read roles) could be added to the user role. When the user belongs to both the "user" and "admin" roles, both claims will be added to his property bag.
@Tseng thanks, tested it by adding claims to roles manually in the database and it worked. Do you know how I avoid claims being applied twice? Like when in your example roles are added to "admin" and roles.manage are added to "manager" now one user is in both roles "admin" and "manager", this user would have the claim roles two times.
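A minimal sketch of the pattern discussed above. The policy and claim names ("SeeAllRoles", "permission", "roles.view", "NewsDisplay") are illustrative, not from the thread; the policy requires a permission claim, and the claim is attached to whichever role should grant it via RoleManager:

```csharp
// Startup.cs, ConfigureServices (sketch; names are example values)
services.AddAuthorization(options =>
{
    // Any role (or user) carrying this claim satisfies the policy,
    // so access is configured on the role, not hard-coded per user.
    options.AddPolicy("SeeAllRoles",
        policy => policy.RequireClaim("permission", "roles.view"));
});

// Somewhere in an admin action: attach the permission to a role at runtime.
var role = await _roleManager.FindByNameAsync("NewsDisplay");
await _roleManager.AddClaimAsync(role, new Claim("permission", "roles.view"));
```

Note that this assumes the default Identity claims principal factory, which copies a role's claims onto the signed-in user's principal when roles are enabled.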
|
STACK_EXCHANGE
|
Why are right angles important
I am interested if there is a concise mathematical way of expressing what is important about a right angle.
I am not so much asking for, say, a list of applications of right angles. Obviously they are used in endless situations and analyses. But my gut feeling is that there is something fundamental about the concept which might be able to be expressed concisely or even beautifully.
It may be that the answer I'm looking for is just the concept of orthogonality. But I'm not a mathematician and so I'm unsure if that is really a fundamental concept itself or actually a kind of result derived from something more basic.
Afaik, two vectors are orthogonal when their inner product is $0$, and this definition only depends on what inner product you consider. So orthogonality is just the zero of something, and it ends up simplifying a lot of relations.
For instance, take a Cartesian coordinate system. Imagine how much more complicated things would be if the axes were not perpendicular/orthogonal. The "0" thing just simplifies everything (until you find a more powerful abstraction).
Right angles give us a convenient system of orthogonality, that helps us break down bigger things into components that can be analyzed independently. Think of how in physics when we calculate "work-done", we can neglect all components of a force which are orthogonal to the direction of displacement.
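That work-done example can be made concrete with a small sketch (plain Python, no libraries; the numbers are illustrative): the inner product of the force with the displacement gives the work, and any force component orthogonal to the displacement contributes nothing.

```python
def dot(u, v):
    """Inner product of two same-length vectors."""
    return sum(a * b for a, b in zip(u, v))


force = (3.0, 4.0)           # applied force
displacement = (5.0, 0.0)    # motion along the x-axis

# Only the x-component of the force does work; the y-component
# is orthogonal to the displacement, so it drops out entirely.
print(dot(force, displacement))         # 15.0
print(dot((0.0, 4.0), displacement))    # purely orthogonal -> 0.0
```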
When we started the primitive business of measuring things, we encountered tons of objects which stood "perpendicularly" on the ground, in some loose sense of the word. Tall trees, hills, take your pick. People realized that taller objects create larger shadows, and at some point, this correlation led them to the question of whether this relation could be used to measure how tall things are.
If a $1$m stick creates a $10$cm shadow at afternoon, how tall is the huge tree which has a $10$m shadow at the same point of the day? This gave rise to trigonometry. People built homes, pyramids, so on and so forth using all these clever techniques.
From a definition standpoint, Euclid found a beautiful way to define right angles:
When a straight line intersects another straight line such that the adjacent angles are equal to one another, then the equal angles are called right angles and the lines are called perpendicular straight lines.
(definition taken from this website).
This definition is really a wonderfully efficient way to pin down the concept of a right angle! In order to think about angles, we need some concept of "the space between two lines," however we choose to interpret that. But once we have two lines (or rays, line segments, whatever) that intersect, the bare minimum requirement to talk about angles, we can define right angles using only the concept of equality, with no reference to numbers even.
I will not attempt to give a concise way of expressing what is important about a right angle. Rather I can give a famous example:
Right-angled triangles are important because of Pythagoras Theorem $a^2+b^2=c^2$. This includes famous problems about right-angled triangles in number theory, for example the congruent number problem, relating to elliptic curves, Fermat's last Theorem and the Birch-Swinnerton-Dyer conjecture.
|
STACK_EXCHANGE
|
Within the coming month, Halloween will be right around the corner! And with that, comes Mad King Thorn and holiday festivities. Do you think he will make his return in GW2? Do you want to see a Halloween celebration in GW2? Will ArenaNet be able to produce such an event with the game only being a month old? Thoughts? Opinions?
Posted 29 September 2012 - 12:51 AM
That's what I was thinking, but then I remembered... the GW1 launch was much smoother and less problematic than GW2… which is why I wonder if we really will see it.
Edited by Naima Omadara, 29 September 2012 - 12:54 AM.
Posted 29 September 2012 - 01:01 AM
Comparatively, GW1 had a lot less that could go wrong during release than GW2 does. There was another thread on this exact topic, but I can't seem to find it, so yeah. Someone in there said something about ArenaNet confirming they were indeed working on Halloween festivities; as to what they are, no one knows yet.
Posted 29 September 2012 - 01:10 AM
They did? I must have missed it. I really hope so... I can just imagine the possibilities of scale and content for a festival in GW2... it could really be amazing!
Posted 29 September 2012 - 01:27 AM
Also, costumes. Now only $9.99! Now you can look like random animals in silly outfits!
Posted 29 September 2012 - 01:29 AM
Given how cheesy the Mad King has always been, I'm rather curious as to whom they'd choose to voice this role. Considering some of the talent they've gotten so far, I'd expect it to be someone fairly well known. J. K. Simmons and Jon St. John come to mind, but it's likely there's someone even better for it that I'm not thinking of.
Posted 29 September 2012 - 01:51 AM
Really, I just can't wait for Wintersday.
Posted 29 September 2012 - 06:20 AM
The perfect person is Brian Blessed, simply because I can see the Mad King shouting the answers to his jokes all the time.
Posted 29 September 2012 - 04:42 PM
As I said before, I don't think ArenaNet will change their precedent of working on events/goodies on their own time.
|
OPCFW_CODE
|
SharePoint Online - Office 365 - Help to clearing up some issues and some inspiration/guiding
I have been employed by a company that uses Office 365 Enterprise E3
They are not completely satisfied with their SharePoint solution and for good reason.
Technically it works, but it looks more like "Windows Explorer as a website". Users mostly use it as a file share and that's it. For example, they create a document, upload it, and send an email around in case others might be interested in it.
I am not an expert in SharePoint, but I believe SharePoint can do much more than that.
The company is a consulting company with offices in several cities and is not structured as a traditional organization (sales, marketing, economy, finance, development, etc.) but more as independent offices in different cities.
They want to use SharePoint as a knowledge and collaboration center.
First, I think: aha, they need an intranet with team sites. But is it possible with their chosen solution?
If so, would it be a good starting point to base the team sites on individual cities (e.g. city 1, city 2, city 3, etc.) and then create a structure within a single city?
Can you possibly guide me to examples/tutorials on building intranet team sites?
Which version of SharePoint does Office 365 feature (is it SharePoint 2016)?
Can you develop new designs / new apps / new structures?
Can you, with advantage, use the Office UI Fabric, or should you develop the hardcore way with Visual Studio?
What about SharePoint Designer?
The company would like to use Yammer as a social medium.
Can you use Yammer as a news feed for SharePoint?
If yes, is there a guide/tutorial on integrating Yammer with Office 365/SharePoint?
I do not have the opportunity to work on their server, as I am afraid of crashing it.
I have therefore signed up for a trial (30 days) of their solution, Office 365 Enterprise E3.
But I got stuck, because you obviously have to install Office 2016 on your computer.
This is not possible (for me), since I have already bought and paid for Office Professional Plus 2013, which would have to be uninstalled first.
My computer is shared among multiple users (my family) who use Office daily, and in principle I only need to use SharePoint.
Is it possible to try a trial of SharePoint Online that corresponds to the version included in Office 365?
As mentioned earlier, I am not an expert in SharePoint and have only worked with SharePoint 2010 on-premises
Is there anyone who can help to clarify these issues
This question is really 5-10 individual questions, many of which are primarily opinion-based. Therefore it will be closed.
OK, if it has too many possible answers, then it would be nice if you could answer some of the questions. If some have opinion issues, then don't answer them.
But closing the question is not going to help me or others. I just need some direction to get started using SharePoint Online in a proper way.
If this is not the right forum to ask these questions, maybe you could provide me with a URL for the right and helpful discussion board?
Hi there, it is closed because it is not living up to the standards of this community; please take the "newbie" tour here to learn more: http://sharepoint.stackexchange.com/tour . What you should do is re-ask the questions that are not primarily opinion-based as new, separate questions, to conform to the format of this community.
|
STACK_EXCHANGE
|
This post is part of a series delving into the details of the JSR-352 (Java Batch) specification. Each post examines a very specific part of the specification and looks at how it works and how you might use it in a real batch application.
To start at the beginning, follow the link to the first post
This series is also available as a podcast on iTunes, Google Play, or via the RSS feed.
The next post in the series is here
Apologies to The Cascades this time (had to look that one up..).
This time we’re going to consider three Listeners you can implement around Chunk processing. We already talked about the Chunk Listener that gets control around the entire Chunk, but there are also listeners around the individual read, process, and write operations.
The Read Listener gets control before and after the ItemReader reads an item to process. The after method gets passed the object that was returned by the reader, which gives you an opportunity to look it over and do any post-reader processing you might need to do.
The Processor Listener gets control before and after the ItemProcessor processes the item that was read. The before method gets the object returned by the reader, giving you a chance to do any pre-processing work. The after method gets both the object that was read and whatever object was returned by the processor. This gives you a chance to look at both the input and output of the processor.
Finally, the Writer Listener gets control before and after the ItemWriter writes the list of items returned by the processor in this chunk. The before and after methods both get the list of items to write. This gives you a chance to do something with the list both before it gets written and afterwards.
But that's not all! All three of these listeners also have a method that gets control if an exception is thrown from their respective artifact (so the ItemReadListener has an onReadError method that gets control for an exception from the ItemReader, etc.). All the methods get passed the exception thrown. This means you can create an exception object and use it to communicate between the reader/processor/writer and the listener. The processor and writer listeners also get the item (or items) being processed/written.
Be careful though… the onError methods do not catch the exception. They just get informed that it happened. Handling the exception is something we’ll get into in future posts.
So, what good is all this? Probably you’ve tried to create somewhat generic readers, processors, and writers that do specific tasks (read from this data source, do this processing, etc). The listeners around them give you a chance to smooth out the edges between them or do some specific processing for this job that you don’t want in the general use reader etc. As with most listeners, it is just a chance for you to get control around the mainline processing and do some extra stuff.
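Spring Batch is a Java framework, but the before/after/on-error control flow these listeners hook into can be illustrated language-agnostically. As a rough sketch in plain Python (the names Listener and run_chunk are illustrative, not the actual Spring Batch API):

```python
# Conceptual sketch of how a batch framework invokes item-level listeners
# around a chunk. Only the read-error hook is shown; process/write error
# hooks would follow the same pattern.

class Listener:
    def before_read(self): pass
    def after_read(self, item): pass
    def on_read_error(self, exc): pass
    def before_process(self, item): pass
    def after_process(self, item, result): pass
    def before_write(self, items): pass
    def after_write(self, items): pass

def run_chunk(reader, processor, writer, listener, chunk_size):
    items = []
    for _ in range(chunk_size):
        listener.before_read()
        try:
            item = reader()
        except Exception as exc:
            listener.on_read_error(exc)  # informed only; exception still propagates
            raise
        if item is None:                 # reader exhausted
            break
        listener.after_read(item)
        listener.before_process(item)
        result = processor(item)
        listener.after_process(item, result)
        if result is not None:           # returning None filters the item out
            items.append(result)
    listener.before_write(items)
    writer(items)
    listener.after_write(items)
    return items
```

A listener subclass that overrides, say, after_read can then inspect or log each item without the reader itself knowing anything about it, which is exactly the "smooth out the edges" role described above.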
A Collaborative Framework to Ensure
Sustainable and Legal Trade of Reptile Skins
Sandton Convention Centre - Exhibition Room 2D
Illegal trade and trafficking of wildlife has become an issue of global proportions and concern, putting at risk the existence of species all over the planet and the livelihoods and security of millions of people.
In response to this problem, global efforts have been scaled up to combat illicit activities and the networks and commercial routes that stimulate and sponsor wildlife trafficking and illegal trade.
For species and derived products that have trade bans such as ivory, rhino horn and other Appendix I species, a number of very promising results have been seen in recent years that attest to the increasing effectiveness of actions to reduce demand, strengthen control and enforcement, and investigate and prosecute offenders, among others.
For species and derived products where international trade is allowed, there are a number of opportunities that could be explored and promoted to combat illegal trade more effectively by strengthening the frameworks and tools that allow sustainable and legal trade to flourish, while at the same time empowering local communities as custodians of biodiversity and crucial partners in eradicating illegal activities.
Such is the case for many species of crocodilians, snakes and lizards. The reptile industry provides livelihood opportunities for millions of people around the world and for many years has pioneered market-based initiatives for sustainable use. Exploring ways to strengthen this industry and its regulatory frameworks could have concrete and long-lasting effects on creating positive market-based economic incentives for the conservation of species and habitats, the humane treatment of the animals, and the eradication of illegal trade.
The overall goal of the side-event will be to position the sustainable use of crocodiles and snakes as a way to contribute to and achieve CITES goals and empower communities to effectively participate as active partners in these efforts.
This objective will be achieved by putting forward a proposed collaborative framework for ensuring sustainable and legal trade of reptile skins and its contribution to the conservation of the species and their habitats.
Specific objectives of the side-event will include providing examples and results of work being undertaken to advance this framework, including on:
• Empowering local communities to become the main custodians of the species and their habitats.
• Developing best practices for wild-harvesting-based production systems.
• Improving the development and implementation of NDF models.
• Exploring the potential complementarity between wild-harvesting and captive-breeding systems to increase production volumes sustainably.
• Advancing species identification and traceability methods to strengthen control and enforcement.
17h30 – 17h35 Welcome remarks and introduction to the meeting
Mr. Eduardo Escobedo, RESP
17h35 – 18h50 Panel discussion: Effective tools and methodologies to advance sustainable production, management and enforcement of reptile skin trade.
The panel will provide a platform for discussion on potential elements that could be included in a collaborative framework for reptile skins including presenting case-studies on a number of ongoing initiatives working in this direction.
Moderator: Ms. Auria Dwi Putri, RESP – What could a collaborative framework look like from the RESP perspective?
Ms. Isabel Camarena, CITES Scientific Authority of Mexico
Complementary production systems: a community-based ranching protocol for Morelet’s crocodile
Ms. Ratna Kusuma Sari, CITES Management Authority of Indonesia
Development of a traceability system for the reticulated python
Mr. Mark Auliya, Helmholtz Centre for Environmental Research and Ms. Gillian Murray-Dickson, Royal Zoological Society Scotland
Forensic applications based on population genetics
Ms. Ségolène Trevidic, LVMH Group
An industry perspective on the importance of ensuring sustainable and legal trade
Ms. Alejandra García Naranjo, RESP
Opportunities and challenges to linking these elements together
Questions and interactive discussions
18h50 – 19h00 Closing remarks
Health & Monitoring
Kaleido environments surface a rich suite of health and monitoring metrics for granular insights into your resource stack's health and performance and underlying blockchain.
From the lefthand navigation within your environment, expand the Health & Monitoring tab to display the available categories.
Displays the CPU, Memory, and Disk Consumption of environmental resources.
Take note of the dropdown menus at the top of the screen. They provide you with customizable parameters to filter amongst relevant details to your organization and overall network.
- Types - The types of resources you wish to view. Choose between ALL or enumerate a specific service type.
- Runtimes - The specific runtimes you wish to see metrics for.
- Metric Interval - The display interval for the underlying metrics. Choose between 5, 15, 30, 45, or 60 minute intervals.
- Time Frame - The overall window you wish to see metrics for.
Provides information about the following metrics for one or more runtimes. Use the dropdown menus at the top of the screen to filter amongst types and specific runtimes.
- CPU Utilization - Average CPU utilization (%) over time.
- Memory Utilization - Memory Utilization (MB) over time.
- Peer Count - This represents the number of other nodes in the chain that a node is directly connected to.
- Storage - Total disk usage from data generated by the node (levelDB chain/state information and logs).
Click the VIEW RUNTIME button beneath a resource to see additional details.
Provides time-based filterable information about your blockchain layer.
The metrics include:
- Max Blocks per minute
- Min Blocks per minute
- Avg Blocks per minute
- Avg Block Time
- Avg Transaction per minute
- Pending Transactions
- Blocks per hour
- Transactions per hour (toggle between public, private, or all transactions)
The lower portion of the panel provides blockchain-based information specific to your nodes. Use the dropdown windows to customize the displayed runtimes and intervals. The graphs display:
- Pending Transactions - properly signed and submitted transactions waiting to be processed; can occur if the targetGasLimit for the block has been reached before the transaction can be executed, or if no CPU is available to mine the transaction during the current block period.
- Queued Transactions - transactions that are in a node's memory pool but cannot be processed due to nonce mismanagement or a preceding transaction being dropped from the queue.
- Peers - the number of peers each node is connected to via P2P.
- Block height - current block height for the chain.
Note that a healthy environment will show an increasing block height over time (except if the consensus protocol is RAFT, in which case blocks are only produced on demand - when transactions are mined) and nodes that have peer connections to all other nodes in the chain.
My Documents & Messages
Provides tracking information on the number and size of documents/messages sent and received, as well as the number of documents in storage. Filtering for specific runtimes and date ranges is available in the top right.
With enterprise workforces becoming more mobile and distributed, IT teams have been transitioning at least part of their administrative workloads from System Center Configuration Manager to enterprise mobility management products, such as Microsoft Intune.
SCCM is systems management software for managing large groups of computers, including those running Microsoft Windows, Apple macOS, Linux and Unix. Administrators can use SCCM to distribute software, enforce security policies, monitor systems and more.
Intune is a cloud-based enterprise mobility management (EMM) service that uses a device's built-in mobile device management (MDM) capabilities to manage the device and its apps. In addition to mobile devices, administrators can use Intune to manage computers running Windows 10.
In the past, IT had to choose between SCCM and Intune to manage Windows computers. Activating the SCCM client on a Windows device automatically disabled any built-in MDM capabilities. Microsoft assumed that customers would migrate devices to Intune as a group, so there would be no need to permit simultaneous management.
Many organizations, however, required co-management capabilities. For example, an organization might still support Windows 7 computers, which require the SCCM client, or have invested in customized products that integrate extensively with SCCM, making an all-out move to an EMM platform impractical.
What IT needs is a way to bridge the old and new systems so it can move devices incrementally, taking a phased approach to EMM.
Bridge to modern management
Microsoft added co-management capabilities to the SCCM ecosystem to simplify the transition to Intune. As a result, IT can take incremental steps toward a modern management option, while still supporting its legacy systems.
Co-management delivers a bridge between SCCM and Intune, simplifying the process of moving administrative tasks, while minimizing the risks associated with such a move. Currently, co-management only applies to Intune, not other EMM products. Even so, co-management represents an important step toward easing the burden of transitioning to a modern management tool.
This phased approach is possible because of several important changes to Windows 10 and SCCM technologies. The first occurred when Microsoft released Windows 10 version 1607 -- the Anniversary Update. Prior to this release, IT could not join a Windows 10 computer to both on-premises Active Directory (AD) and Azure AD at the same time.
The next important change came with Windows 10 version 1709 -- the Fall Creators Update. With the new release, the SCCM client could run on a device without the MDM capabilities being disabled, making it possible for SCCM and Intune to manage a Windows 10 device at the same time. Shortly after the update, Microsoft released SCCM version 1710, which included the features necessary for co-management.
Together, these changes enable administrators to designate which management workloads SCCM should handle and which workloads Intune should handle. For example, IT can continue to use SCCM to distribute software and manage security, but use Intune to control Windows 10 update policies and resource access policies.
Migrating workloads to Intune
Administrators can use the co-management features for Windows 10 computers whether they manage the devices with SCCM, Intune or another product. Regardless, IT must install the SCCM client on each device. In addition, IT must concurrently join all co-managed clients to on-premises AD and Azure AD and register them as managed devices for both SCCM and Intune.
After IT enables the clients for co-management, administrators can use the SCCM management portal to configure which workloads to move to Intune. SCCM supports three co-management workloads, with each workload tied to a specific set of policies:
- Compliance policies determine the rules and settings with which a device must comply.
- Resource access policies configure a device's VPN, Wi-Fi, email and certificate access settings.
- Windows Update policies control updates for Windows devices managed by Windows Update for Business.
For each workload, administrators can choose from three options to manage policies. The default option specifies that SCCM should manage the policies. The second option sets up a pilot for testing policy management in Intune. Administrators can designate which client devices participate in the pilot. The third option specifies that Intune should manage all the client devices for the selected workload.
Microsoft has suggested that additional co-management workloads will eventually be available, but the company has provided no official details on what to expect or when, although it seems inevitable that the company will continue on this trajectory.
SCCM and Intune co-management
The three workloads might represent only a small step toward co-managing Windows 10 computers, but it's important nonetheless. Organizations that have been locked into SCCM might finally be able to move out from under its mammoth shadow without putting their current systems at risk.
The question remains whether Microsoft will open up these co-management features to third-party EMM products so they too can benefit from phased migrations.
Understand the concept of infrastructure as code
Infrastructure as Code (IaC) is a key concept in the world of cloud computing and cybersecurity. It refers to the practice of defining, provisioning, and managing IT infrastructure through code rather than manual processes. IaC is a fundamental shift in the way we manage and operate infrastructure resources, introducing automation, consistency, and scalability benefits.
Key Benefits of Infrastructure as Code
Consistency: IaC ensures that your infrastructure is consistent across different environments (development, staging, and production). This eliminates manual errors and guarantees that the infrastructure is provisioned in the same way every time.
Version Control: Managing your infrastructure as code lets you track changes to the infrastructure, just as you would with application code. This makes it easier to identify issues and roll back to a previous state if needed.
Collaboration: IaC allows multiple members of your team to collaborate on defining and managing the infrastructure, enabling better communication and visibility into the state of the infrastructure.
Automation: IaC enables you to automate the provisioning, configuration, and management of infrastructure resources. This reduces the time and effort required to provision resources and enables you to quickly scale your infrastructure to meet demand.
Common IaC Tools
There are several popular IaC tools available today, each with its strengths and weaknesses. Some of the most widely used include:
Terraform: An open-source IaC tool developed by HashiCorp that allows you to define and provision data center infrastructure using a declarative configuration language. Terraform is platform-agnostic and can be used with various cloud providers.
AWS CloudFormation: A service by Amazon Web Services (AWS) that enables you to manage and provision infrastructure resources using JSON or YAML templates. CloudFormation is specifically designed for use with AWS resources.
Azure Resource Manager (ARM) Templates: A native IaC solution provided by Microsoft Azure that enables you to define, deploy, and manage Azure infrastructure using JSON templates.
Google Cloud Deployment Manager: A service offered by Google Cloud Platform (GCP) that allows you to create and manage cloud resources using YAML configuration files.
Best Practices for Implementing Infrastructure as Code
Use Version Control: Keep your IaC files in a version control system (e.g., Git) to track changes and enable collaboration among team members.
Modularize Your Code: Break down your infrastructure code into smaller, reusable modules that can be shared and combined to create more complex infrastructure configurations.
Validate and Test: Use tools and practices such as unit tests and static analysis to verify the correctness and security of your infrastructure code before deploying it.
Continuously Monitor and Update: Keep your IaC code up-to-date with the latest security patches and best practices, and constantly monitor the state of your infrastructure to detect and remediate potential issues.
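As a toy illustration of the "validate and test" practice, here is a minimal structural check over a hypothetical CloudFormation-style JSON template (plain Python; real projects would lean on dedicated tools such as cfn-lint or terraform validate, and the required keys below are illustrative, not a full schema):

```python
import json

# Minimal structural check for a CloudFormation-style JSON template.
# The required keys here are illustrative, not a full schema validation.
def validate_template(text):
    errors = []
    try:
        template = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    if "Resources" not in template:
        errors.append("missing top-level 'Resources' section")
    for name, res in template.get("Resources", {}).items():
        if "Type" not in res:
            errors.append(f"resource '{name}' has no 'Type'")
    return errors

good = '{"Resources": {"Bucket": {"Type": "AWS::S3::Bucket"}}}'
bad = '{"Resources": {"Bucket": {}}}'
print(validate_template(good))  # []
print(validate_template(bad))   # ["resource 'Bucket' has no 'Type'"]
```

Running checks like this in CI, before any deployment, is the point of the practice: broken infrastructure definitions fail fast, the same way a failing unit test blocks broken application code.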
Welcome to the UMBC Cyber Dawgs’ homepage. We are a group of UMBC students who share a common interest in computer and network security.
There are a few things you can do to get involved with the club.
President: Bryan Vanek (bvanek1 @ umbc.edu)
Vice President: Anh Ho (a150 @ umbc.edu)
Secretary: Zack Orndorff (zo1 @ umbc.edu)
Treasurer: Christian Beam (cbeam3 @ umbc.edu)
We hold general meetings once a week. No prior experience is required to attend our meetings. We encourage anyone who wants to learn more about cyber security and how to start learning new skills in the field to come to our meetings.
What to expect from a meeting:
Our meetings vary in subject material week to week. If you have specific topics you would like to learn more about please email our club secretary. If you would like to present a talk or teach a new skill please email our club president.
Our meetings for the fall semester were an introduction to computer security. We went over many of the foundational topics in computer security.
Our meetings for the spring semester will continue with that trend. Some topics will be a little more in-depth, but our goals of helping beginners remain unchanged. Please see our calendar for more information.
If you already have experience, don’t worry: you’ll be able to meet like-minded folks, and we will have plenty of more challenging activities available.
For a full description of our schedule, take a look at the calendar below.
Along with our normal club meetings, we also provide great opportunities for individuals who are interested in learning about and participating in cyber related competitions. This upcoming school year, we want to take this a step further with some new initiatives that are currently in the works. These include, but are not limited to:
Hosting our own CTF (Capture the Flag) event at UMBC. It’ll be accessible to beginners, but entertaining for all. Mark your calendars for March 11, 2017. More details to come.
Increasing collaborative and promotional efforts with other UMBC organizations
Providing a comprehensive list of resources for various cyber security related topics
However, if these are all going to come to fruition, we need your help to make it happen! We have a myriad of tasks and projects that will need to be done over the next year, so stay tuned…
National CCDC 2017: 1st place
Mid Atlantic CCDC 2017: 1st place, advanced to national CCDC
CSAW 2016: 16th (undergrad, North America)
Mid Atlantic CCDC 2016: 4th place
MDC3 2015: 1st place
National CCDC 2015: 4th place
Mid Atlantic CCDC 2015: 1st place, advanced to national CCDC
Past club members: if we’re missing anything, please contact our secretary!
The majority of our week to week announcements are done via our mailing list. These announcements include upcoming events, weekly meeting information, and cybersecurity related information.
To join our mailing list please send an email to umbccd-subscribe @ lists.umbc.edu. As a rule we only accept umbc.edu email addresses. If you do not have a umbc.edu email and would like to join, please contact our secretary.
You must have a umbc.edu email address to sign up on Slack.
Alright, so this is as close as I got for today... I just saw you gave an in-depth answer, so first I have to thank you for that, and I will try it tomorrow or the next day and let you know how that works out.
So for this image I figured out a way to mix stencils to paint this plane, which is almost exactly what I want; the only thing I need to figure out is how ...
Consider a Mix Shader node in your shader nodes, where you can change the contribution of each texture. A texture can determine the contribution pixel by pixel: two standard textures linked to two BSDF shaders, linked in turn to a Mix Shader. The relative contributions are mixed in a ratio specified by a hand-painted texture. Image above.
Texture slot ...
Track to Quaternion
To align to vertex normal.
The normal orientation with respect to vertices appears to be a track-to quaternion, tracking Z in the direction of the normal and using -Y as up. Since the only options for up in Vector.to_track_quat(to, up) are 'X', 'Y', and 'Z', track with Y up and invert the scale in X and Y.
Simple example. Run in object mode, ...
So the problem wasn't in the "Merge by distance", but rather because after I imported the model all edges were marked as sharp and I cleared them. If I don't clear them everything works fine. So to anyone having the same problem leave the edges marked as sharp.
While the question is rather old and solved, it's still an important and underestimated topic leading to other questions coming up, so I would like to add one approach I did not see in the available answers here, for completeness' sake.
Normals (or the orientation of the faces) in Blender can be made visible in 2 ways, one way would be with little lines ...
Note: This is only a partial answer, since more information was provided in the comments after it was written. Color management is nonetheless relevant for a correct output.
The reason your correct value of $0.5$ is saved as $0.735357$ is because you're using the Standard view transform and your Display Device is set to sRGB.
The scene linear values are ...
I would recommend bmesh for this.
No face selection required, no toggling mode, no bpy.ops.mesh... operators.
Because a recent answer used an edit mode bmesh, here is an edit mode version
Translates all vertices 0.1 locally in the direction of their normal for each face.
Example scripts, move all faces along their normals.
context = bpy....
There are a few different things that might be causing your problem.
Firstly, there's no way to add thickness without increasing the tri/poly count. Polygons only have one face, so if you view them from the "wrong" side, they are invisible. There is no way to avoid this. If you want to see both sides, you have to have another polygon that is facing the ...
It's because the vertices of the faces are listed in different orders.
The first face (red) is listed clockwise, and the second face is listed anticlockwise (counterclockwise), when viewed from this side.
The normal of a triangle can be calculated from the cross-product of two of its edges. The function is not commutative, so the order in which the edges are taken ...
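The geometric fact here (a normal computed from the cross product of two edges flips sign when the winding order is reversed) can be checked numerically with a few lines of plain Python, no Blender required:

```python
# Triangle normal from the cross product of two edges; swapping the
# winding order of the vertices flips the normal's direction.
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def triangle_normal(a, b, c):
    return cross(sub(b, a), sub(c, a))

a, b, c = (0, 0, 0), (1, 0, 0), (0, 1, 0)
print(triangle_normal(a, b, c))  # (0, 0, 1)  counterclockwise winding
print(triangle_normal(a, c, b))  # (0, 0, -1) clockwise winding flips it
```

This is why two faces that share the same vertices but list them in opposite orders end up shaded differently: their computed normals point in opposite directions.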
Unfortunately I don't know a way to display the normal vector in the viewport.
Here are two options to get the normal vector. I'd say it depends on what you want to do with the normal vector after getting it...
Option 1 Python:
mesh = bpy.context.object.data
selected_verts = [v for v in mesh.vertices if ...
Example of assigning custom normals based on vertex selection. It's just a slight modification of the code given in this answer and assigns a custom vector to vertices in selection.
Run the script in Object Mode and enable show_split_normal property in Edit Mode:
context = bpy.context
ob = context.object
me = ob.data
me.use_auto_smooth = True
we are developing a multi vendor delivery website (like Glovo, Just Eat and similar) based on the Foodpicky theme (The site is [login to view URL]). This theme comes with WC Vendors Marketplace plugin, a WooCommerce multi vendor plugin.
It works fine with that, but we prefer WCFM Marketplace plugin by WClovers, another WooCommerce multi vendor plugin with some extra features.
To make the theme work with WCFM we need some customizations.
At the moment, front end pages display several info taken from user profile and from WC Vendors configuration.
Since we don’t use WC Vendors we need all this info to be taken from the WCFM vendor dashboard. We don’t even want to use info from the user profile because similar info is stored in the WCFM dashboard too, and since vendors must use just one control panel we prefer them to use only the WCFM dashboard.
Most of this info is managed by the Foodpicky control panel, and it’s possible to place it in various page hooks.
We identified the PHP files where this info is retrieved from the database, but we are not skilled enough to edit them. We think most of the job consists of switching these info sources from one db table to another.
In this way, Foodpicky control panel should keep on working just the same as now.
The info to be taken from WCFM (instead of from the user profile and WC Vendors) and displayed on the website is:
store opening hours;
all the store images (logo, banner, mobile banner);
delivery time (WCFM processing time);
minimum order amount.
All this information is managed by a configurable template of the AZEXO plugin editor and page builder ([login to view URL]), which controls the appearance of the seller page and chooses where to display this information. The editor lets you choose, for each field, which custom field should be displayed. So the job is to make these custom fields take their values from the "WCFM" plugin instead of from the "Profile" of the parent theme. (For more information look at this page [login to view URL].)
Furthermore, we need a “customer delivery time” module to be developed: when a customer places an order he must specify, in the check out page, when he prefers the order to be delivered.
There must be a check box for “as soon as possible” and a calendar for selecting day and time of delivery. If “as soon as possible” is checked, calendar must be disabled, otherwise customer must choose a day and time (timesteps by 30 minutes) of delivery starting from the time he places the order up to 48 hours (2 days) from that time.
The available day and time of delivery must of course match the actual vendor opening hours (and days) taken from the WCFM data. This information should be previously acquired during vendor configuration via the WCFM control panel.
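The requested "customer delivery time" module boils down to a slot generator: 30-minute steps from now up to 48 hours ahead, filtered against the vendor's opening hours. A rough sketch in plain Python (the opening-hours dict keyed by weekday is an assumption; the real data would come from the WCFM vendor settings):

```python
from datetime import datetime, timedelta

# Generate candidate delivery slots in 30-minute steps, up to 48 hours
# ahead, keeping only those inside the vendor's opening hours.
# opening_hours maps weekday (0=Monday) to an (open_hour, close_hour) pair.
def delivery_slots(now, opening_hours, step_minutes=30, horizon_hours=48):
    # round up to the next step boundary after "now"
    minute = (now.minute // step_minutes + 1) * step_minutes
    slot = now.replace(minute=0, second=0, microsecond=0) + timedelta(minutes=minute)
    end = now + timedelta(hours=horizon_hours)
    slots = []
    while slot <= end:
        open_h, close_h = opening_hours.get(slot.weekday(), (None, None))
        if open_h is not None and open_h <= slot.hour < close_h:
            slots.append(slot)
        slot += timedelta(minutes=step_minutes)
    return slots

hours = {d: (12, 22) for d in range(7)}  # open 12:00-22:00 every day
slots = delivery_slots(datetime(2021, 3, 1, 18, 10), hours)
print(slots[0])  # 2021-03-01 18:30:00
```

In the checkout form, the "as soon as possible" checkbox would simply bypass this list, while the calendar widget would be populated only with the returned slots.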
We need to know exact price and time needed for all this to be done. Please consider that we have very short time.
45 freelancers are bidding on average €254 for this job
Dear Hiring Manager, I can help you in the project. Please let me know when we can discuss in more details so that we could proceed further. Looking forward for your response. Thanks. Kind Regards, Surya
Follow these example steps to ensure that DNS traffic is sent to the Zscaler Zero Trust Exchange (ZTE) from Client Connector users and endpoints.
The steps below are intended to be a simplistic explanation for the purposes of highlighting general guidelines for configuring Client Connector. It is important to consult the more comprehensive Helpdocs configurations in order to fully assess the impacts and decide on an approach (among other considerations) prior to making changes or enabling.
Step 1: Enable Tunnel 2
First step is to ensure that the Client Connector can send non-web traffic. To do this, ensure that Tunnel 2 is enabled.
If migrating from a Tunnel 1 deployment to a Tunnel 2 deployment, there are a series of best practice recommendations that should be examined to do this in both a comprehensive and non-disruptive manner. It is therefore important that the Helpdocs for tunnel 1 to 2 migration are consulted, understood, and part of the migration plan.
Two further considerations here:
First is that Cloud Firewall policy will apply to any non-web traffic that is sent to the Zero Trust Exchange (ZTE and specifically ZIA). Today, Cloud Firewall can apply policy to traffic forwarded via Tunnel 1 or forwarded from fixed Location deployments (GRE, IPSec, etc). Please see the Cloud Firewall Helpdocs and take note of configurations like the “Enable Firewall for Z-Tunnel 1.0 and PAC Road Warriors” in Advanced Settings and the required enablement of Cloud Firewall for each fixed Location in the Location Management of the Admin settings.
Second, enabling Tunnel 2 is also a step towards enabling Cloud Firewall for your users. This ensures that corporate security follows these users wherever they go and branches and locations are no longer constrained by the functional and operational limits of physical or VM/logical legacy-generation firewalls. This means that all traffic is examined with DPI, any non-standard web traffic is directed to the SWG, and IPS signature rules are applied to all non-web threats – all in addition to DNS Control for standard DNS and DNS over HTTPS (DoH).
A final note is that new Zscaler customers will soon have Tunnel 2 enabled by default and this will become the standard Client Connector deployment especially targeting remote users (Road Warriors). The above described first step is for existing customers who have not already enabled Tunnel 2.
Step 2: Set your Includes and Excludes
The Includes tell what IPs, ports, protocols, and domains should be sent to the ZTE for DNS Control to examine. The Excludes indicate what should not be sent. Generally, the most specific designation here wins.
To ensure that standard DNS is sent we want to add a more specific Destination Inclusion to target just the standard DNS traffic and enter “0.0.0.0/0:53”.
Domain Inclusions for DNS can start by simply assuming the asterisk wildcard meaning “all domains should be included and sent to ZTE”. Consider adding any private domains that are not publicly resolvable to the Domain Exclusions list like “*.INTERNAL” or “*.MYCORPDOMAIN.NET” etc.
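The "most specific designation wins" behavior described above can be sketched as a longest-prefix matcher (plain Python using the stdlib ipaddress module; the rule format and tie-breaking here are illustrative, not Zscaler's actual implementation):

```python
import ipaddress

# Toy include/exclude matcher: among matching rules, the one with the
# longest CIDR prefix wins; at equal prefix length, a port-specific rule
# beats a port-agnostic one. Rules are (cidr, port_or_None, action).
# This mirrors the idea that a specific include like 0.0.0.0/0:53 can
# carve standard DNS out of broader traffic.
def decide(ip, port, rules, default="exclude"):
    best = None
    for cidr, rule_port, action in rules:
        net = ipaddress.ip_network(cidr)
        if ipaddress.ip_address(ip) in net and rule_port in (None, port):
            if best is None or net.prefixlen > best[0].prefixlen or \
               (net.prefixlen == best[0].prefixlen and rule_port is not None):
                best = (net, action)
    return best[1] if best else default

rules = [
    ("0.0.0.0/0", 53, "include"),     # send all standard DNS to the ZTE
    ("10.0.0.0/8", None, "exclude"),  # keep an internal range local
]
print(decide("8.8.8.8", 53, rules))   # include
print(decide("10.1.2.3", 80, rules))  # exclude
print(decide("8.8.8.8", 443, rules))  # exclude (default: no rule matches)
```

The takeaway is only the precedence idea: an entry like "0.0.0.0/0:53" is narrower than a bare network rule, so it can pull DNS into the tunnel while broader excludes keep other traffic out.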
Step 3: Modify Further According to Real Use Case
Any DNS over non-standard ports needs to be explicitly added to the port Includes but also the DNS network service needs to be modified to include the non-standard DNS.
Also, DNS over HTTPS (DoH) is not considered non-standard DNS, and if web traffic is already being directed to ZIA then DNS Control policy will apply to DoH traffic just as it does to standard DNS.
Be sure to consult the Helpdocs for Client Connector for more details: Policy & Administration Settings
interfaces. It is, therefore, easy for developers to build visually-appealing applications.
1] More Than Just a Set of Guidelines
What makes Material Design different from design guidelines in general is the fact that it is an entire ecosystem. In other words, it has predefined solutions for different design situations, through use cases that can be easily referred to by designers. This feature is a favourite with UI designers.
2] Access to Systematic Documentation
Like all other Google products, Material Design also comes with the Google “advantage”: in this case there exists detailed documentation to help designers understand, explore and start using the set of guidelines without any trouble. This support is much welcomed by designers across the globe, who find it easy to start using Material Design.
3] The Element of Flexibility
Although Material Design has predefined guidelines for every design scenario, it also offers an element of flexibility. The designer has the freedom to work with the different design elements and choose how to implement them. Thus, it offers the perfect balance of rules and flexibility, allowing designers to apply their creativity.
4] More Intuitive in Nature
While design is subjective and opinions about a design vary from one person to the next, it has been observed that Material Design layouts are more intuitive for most users, compared to the flatter design approach that existed earlier. This is one of the factors behind the widespread use of Material Design.
5] Ideal Design System for Mobile Apps
Material Design is one of the most compatible design systems for mobile apps, as it was originally developed for designing Android applications. Since mobile apps are increasing with the growth in the number of smartphone users, this design system is gaining popularity with UI designers across the globe.
Material Designs has also launched the Dark Theme which offers more flexibility to the designers in terms of experimenting with designs.
Whether you go gaga for Material Design or gag looking at it, the “card” or “paper” concept with a focus on surfaces and edges continues to be a popular and broadly applied application style.
Material-UI, the React component library based on Google's Material Design, allows for faster and easier stylized web development. With basic React framework familiarity, you can build a deliciously material app with Material-UI, and it's almost like cheating. Almost.
This MIT-licensed open source project is more than just parlor tricks though and can get deep quickly. But don’t let me scare you! I recently built an app with Material-UI for the first time, and by the end I was delighted. Here are 5 things I appreciated about Material-UI:
1] It’s well documented
The official documentation is organized and easily navigable. The library’s popularity means you have access to tons of code examples on the web if the documentation is confusing. You can also head over to StackOverflow for technical Q&A from Material-UI devs and the core team.
2] Regular updates
With the recent release of Material-UI v4 (May 2019) and blog posts with new features and future goals posted monthly, Material-UI doesn't look like it's dying down anytime soon. Along with feature updates and improvements, here are the GitHub stats from the latest blog, posted November 8th:
We have accepted 182 commits from 68 different contributors. We have changed 1,157 files with 31,312 additions and 9,771 deletions.
3] Consistent appearance
Okay, this is kind of cheating because it's a library, so of course the appearance is going to be consistent. BUT Material-UI is a HUGE library, and the benefit is you have some choices.
Aesthetic preferences for Material Design aside, your web project has a high chance of retaining similarity in appearance and functions all throughout.
4] Creative freedom
You don’t have to have a consistent appearance if you don’t want to!
I know, I know – I just said that it creates a consistent appearance, but that’s out of the box. As I hinted at earlier, there is actually quite a lot of depth to the Material-UI components and the developers encourage customization. Material-UI doesn’t force Material Design style on you, it just offers it.
One delightful component was the ThemeProvider. Placed at the root of your app, you can change the colors, the typography and much more of all sub-Material-UI components! However, this is optional; Material-UI components come with a default theme. Code magic.
|
OPCFW_CODE
|
Undoubtedly the release number will continue to increment as I find and fix little bugs in my work on the AI improvements branch. Either way, v0.4.2.x is now the stable branch, carrying with it two major features: variations in replays, and some additions to the stable of rules.
First, variations: the headline feature for v0.4.x, variations provide players the ability to play out alternate histories in replay mode. These alternate histories can be saved and loaded, as well as annotated (and, annotations now have a handy new in-game editor for ease of production).
Second, rules: I finally got around to implementing the last two rules preventing OpenTafl from playing almost all known tafl variants. Not coincidentally, OpenTafl now supports every rule supported by PlayTaflOnline.com, so the bot player there can compete on every front. Poorly.
That brings me to my last point for today: AI improvements. What's on the list? Well, I have a few things.
First thing’s first: I have to characterize OpenTafl’s particular failings. The ones I can most easily fix rest in the evaluation function. If the evaluation function provides inaccurate evaluations of board states, then all the rest—heuristics and fancy pruning alike—rests on an unsteady foundation. The analysis engine feature aids me in this quest. My workflow for this initial phase goes like this: I play a game against the AI, using the analysis engine to feed me the AI’s moves. When the AI makes a move which is obviously bad, I go to the replay, then tell the analysis engine to dump its evaluations for the states in question. By comparing those evaluations with evaluations of the move I would prefer, I can begin to see what the AI sees.
I’ve already made two interesting discoveries regarding AI weights and preferences. First, it places far too much importance on guarding or attacking the king, to the point that the attacker AI will happily sacrifice piece after piece if only it puts the king in ‘check’. Second, flowing out of the first item, it has a badly inaccurate view of what constitutes threatening an opposing piece. When it decides to threaten an enemy piece, it will happily do so by moving one of its own pieces into a position where it can immediately be recaptured. Oops.
So, once I’ve made changes to fix those mistakes, how do I verify that I’ve made a positive difference? I play the AI against old versions. Thankfully, building older versions of OpenTafl is trivial. (If you’ve been following development, you’ll no doubt have noticed that there’s a Mercurial tag for every release.) I have a little tool which is intended to do Elo ratings for chess clubs, but which will serve to do Elo ratings for OpenTafl versions just fine. This helps quantify not only whether a version is better than another version, but by how much.
Once I have the evaluation function a little more settled, I can move onto some extra heuristics. I have two in mind for the moment: the killer move heuristic, and the history heuristic (or some related heuristic). The killer move heuristic is the more promising one. It assumes that most moves don’t change the overall state of the board too much, and there are likely to be, at a given depth in the tree, only a few plausible moves to push the evaluation in your direction. Therefore, if any of those moves are possible at that depth, the AI should try them first.
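As a sketch of how the killer move heuristic slots into move ordering (illustrative Python, not OpenTafl's actual Java implementation): remember the last couple of moves that caused a beta cutoff at each depth, and try those first among a node's legal moves.

```python
KILLERS_PER_DEPTH = 2  # a common choice; the exact number is a tuning knob

def record_killer(killers, depth, move):
    """Remember a move that caused a beta cutoff at this search depth."""
    slot = killers.setdefault(depth, [])
    if move in slot:
        return
    slot.insert(0, move)          # most recent killer first
    del slot[KILLERS_PER_DEPTH:]  # keep only the newest few

def order_moves(killers, depth, legal_moves):
    """Put known killers (if legal here) ahead of the remaining moves."""
    slot = killers.get(depth, [])
    killers_first = [m for m in slot if m in legal_moves]
    rest = [m for m in legal_moves if m not in slot]
    return killers_first + rest
```

The payoff comes from alpha-beta pruning: trying a likely-cutoff move first lets the search skip the siblings entirely.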
The history heuristic is more complicated. There are three variations I’m considering. First, the straight history heuristic, which orders moves by how often they’ve caused a cutoff. This prefers, in the long run, making moves which have been good elsewhere in the tree. Straightforward, compared to the others.
Second, the butterfly heuristic, which orders moves by how often they occur anywhere in the tree. This prefers making moves which are frequently considered. This one is a little more subtle. Alpha-beta search, by its very nature, ends up searching only the most interesting moves, pruning away the rest. The butterfly heuristic, by tracking moves which turn up a lot, is essentially tracking which moves are more interesting.
Finally, the countermove heuristic, which tracks which moves are the best responses to other moves, and weights moves matching the known good countermove more heavily. This one requires very little explanation.
So, in the next month or two, I expect the strength of OpenTafl’s AI to improve considerably. Stay tuned!
|
OPCFW_CODE
|
Shows No Video Found even if video exists
I was creating transcripts for around 10k videos using a cron. After creating around 260 transcripts, this library started giving error, video not found. However, I checked that the video existed and its CC also existed. Is there any rate limit imposed by youtube? If yes, can you describe it in detail and how can I overcome it?
Also shouldn't the library throw 429 error in that case? Please reply to this as it is urgent.
I have fetched transcripts for many thousands videos at once without running into any rate limits. Unless they recently implemented something like that, I don't think that this is the problem.
A few questions:
Can you name some IDs it is failing for?
Is the error replicatable?
Does it always fail for exactly the same videos?
Please check this ID: 3sibOgsok1Q
It says the video is no longer available, though it should say that the subtitles are disabled for this video (As it shows most of the times)
Also, please check this YouTube ID: wehYkSa2oAg
It gives me the same error though the captions are available. Can you provide a quick fix for me?
Here is the traceback of the error
`Traceback (most recent call last):
File "/home/apps/haygot/content/services/transcript_service.py", line 19, in download_transcript
transcript = YT.get_transcript(self.youtube_id, languages=['en'])
File "/home/apps/.local/lib/python3.6/site-packages/youtube_transcript_api/_api.py", line 128, in get_transcript
return cls.list_transcripts(video_id, proxies, cookies).find_transcript(languages).fetch()
File "/home/apps/.local/lib/python3.6/site-packages/youtube_transcript_api/_api.py", line 70, in list_transcripts
return TranscriptListFetcher(http_client).fetch(video_id)
File "/home/apps/.local/lib/python3.6/site-packages/youtube_transcript_api/_transcripts.py", line 34, in fetch
self._extract_captions_json(self._fetch_html(video_id), video_id)
File "/home/apps/.local/lib/python3.6/site-packages/youtube_transcript_api/_transcripts.py", line 42, in _extract_captions_json
raise VideoUnavailable(video_id)
youtube_transcript_api._errors.VideoUnavailable:
Could not retrieve a transcript for the video https://www.youtube.com/watch?v=wehYkSa2oAg! This is most likely caused by:
The video is no longer available
If you are sure that the described cause is not responsible for this error and that a transcript should be retrievable, please create an issue at https://github.com/jdepoix/youtube-transcript-api/issues. Please add which version of youtube_transcript_api you are using and provide the information needed to replicate the error. Also make sure that there are no open issues which already describe your problem!
ERROR 2020-02-25 15:44:25,797 transcript_service 24067<PHONE_NUMBER>06848
`
I can not replicate these errors.
When I run YouTubeTranscriptApi.get_transcript('3sibOgsok1Q') I get a TranscriptsDisabled exception, which is correct, as the transcripts for this video are disabled.
And when I run YouTubeTranscriptApi.get_transcript('wehYkSa2oAg') it returns the correct transcript, as expected.
Do you also run into those errors, if you just execute these requests on their own, or does it just happen if you do a lot of requests?
You can share some more code if you want and I can have a look at it, but it seems that this error is specific to your network. As I mentioned before, I have previously fetched more than 50k transcripts at once without running into any rate limits. So either these rate limits are specific to your country or there's some kind of bottleneck in your network.
Maybe try fetching the transcript for the same video (pick one you know that it is working for) for like 10k times in a row. That way you can make sure it's not a problem with the video or API itself, but there's some kind of rate limit, if you'll start running into issues after a certain number of retries.
Okay sure will try and get back to you. Thank you for your time. Unfortunately, I cannot share the code as it is supposed to be confidential as per the organization's guidelines.
@prakhardg I will close this now, as the underlying issue doesn't seem to be related to this module. However, feel free to post updates in here, maybe they'll be helpful to someone else.
@prakhardg Odds on you've worked through your implementation.
Worth leaving here that from what I've been working on, I ran into the same problem. Appears that the YouTube infrastructure stops offering up responses to requests from the requesting IP for data after a certain number of queries. In my case, using a server in the SF area, pulls aren't successful after ~480 episodes in an hour and once blocked it's blocked for 24 hours. Suspect it's some kind of DDOS protection kicking in.
To make sure it was IP based, I logged into a separate server and made the request for the target video and it worked without issue and same for my local dev machine's IP address.
Yesterday I slowed the function down by sandbagging it with an 8 second delay at the end, which effectively gets my script pulling one video every ~10.5 seconds including HTTP response time and processing of the data coming back. That gets me in the 350-380 episodes an hour range, well below the observed maximum before being blocked. I was able to collect all the episodes for a specified channel with more than 2K episodes without getting the server blocked from making pulls.
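The delay-based workaround described above can be sketched as a small wrapper. This is a sketch only: the fetch callable stands in for the real youtube_transcript_api call, and the 8-second default is the figure observed above, not a documented limit.

```python
import time

def throttled_fetch(fetch, video_ids, delay_seconds=8.0):
    """Call fetch(video_id) for each id, sleeping between requests to stay
    under an observed per-IP cap (~480 requests/hour before a 24h block)."""
    results = []
    for i, video_id in enumerate(video_ids):
        if i:  # no sleep before the very first request
            time.sleep(delay_seconds)
        results.append((video_id, fetch(video_id)))
    return results
```

With the default delay plus HTTP latency this lands in the 350-380 fetches per hour range, below the block threshold observed above.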
|
GITHUB_ARCHIVE
|
I have seen your requirement to develop an App with required features.
I have strong expertise to accomplish this in decided time frame.
We provide a quality work and support. Please visit my profile to view ou…
Hope you are doing well.
We are much expertise group of iOS, Android, Website developers and Designers
Please review my profile - https://www.freelancer.in/u/rijutapatidar
I understood the initial scope of Mobile a…
I have read the project description and found that I have worked on a similar project before that's why I can do it very fast at a fair price.
I am a professional android, iOS and react native developer wi…
Gretting of the day!!
I have gone through your requirements and understood perfectly
I have experienced many challenges in web development and have been working on maintaining versions of source code using HTML, C…
I have gone through your requirement. I am an Android developer with more than 8 years of experience in app designing and development. Please open a line of communication with me so we can di…
i can create I want a developer to develop my mobile app. From scratch
I am an experienced Android and ios developer and equipped with all the necessary skills to provide you best website that completely satis…
hope you're doing well
I have more than 8 years experience in Android(java+kotlin),React native, Flutter
I’m excited to share with you the proposal for the your requirement. I’m truly excited to be working wit…
Hi, I am a professional hybrid and native mobile app developer. I have top skill about React Native, Flutter and Android so I want to discuss more details about your task over chat so pls send me your message. Thanks. …
--------------------------Mobile application Developer ----------------------
I have gone through your requirements and understood perfectly. I have more than 10 years of experience in Mobile apps Android , iPhone, Nat…
I am experienced in building android applications for more than 2 years. I would like to assist you in building this project. Please visit the portfolio on my profile.
I would be happy if you could share yo…
Thanks for providing us an opportunity to bid on your project. I will complete the work as per your requirements. I will do my best to make you satisfied with high quality and responsibility. I have more than 8 …
I am a computer programmer and app developer. I am very good in programming and Data Structure. I mostly use flutter and native android to build apps. I have been in this field since 9 years. I also have good knowl…
Hope you are doing well there,
This is Faisal and i am having more than 5+ years of experience in Android/iOS Application Development for Native or Cross Platforms.
Some of my delivered projects you can go t…
Hope you are doing well!
As you need a mobile developer who have skills like Java, Kotlin, Swift, Firebase, SQLite, MySQL, Google Analytics etc. I've had all the expertise you expect from a developer. I wil…
I am an experienced Web and Mobile app developer with projects ranging from simple websites to ERP and LMS systems. I develop dynamic Mobile applications which would perfectly cater to your requirements.
I am new to …
I have been working on Flutter App Development from 2019 and i had implemented features as following;
|
OPCFW_CODE
|
class Rack {
private Node[] rackArray;
// Creates a new rack with room for antNoder nodes.
public Rack(int antNoder){
rackArray = new Node[antNoder];
}
// Adds a node to a free slot in the rack. Traverses the array until it finds an empty slot, places the node there, and then breaks out of the loop.
public void settInnNode(Node node){
for(int i = 0; i < rackArray.length; i++){
if(rackArray[i] == null){
rackArray[i] = node;
break;
}
}
}
// Returns the number of nodes in the rack.
public int getAntNoder() {
int teller = 0;
for(int i = 0; i < rackArray.length; i++){
if(rackArray[i] != null){
teller++;
}
}
return teller;
}
// Returns the number of processors in the rack.
public int antProsessorer(){
int antall = 0;
for(int i = 0; i < rackArray.length; i++){
if(rackArray[i] != null){
antall += rackArray[i].antProsessorer();
}
}
return antall;
}
// Returns the number of nodes with enough memory; the required memory is given as a parameter.
public int noderMedNokMinne(int paakrevdMinne){
int antall = 0;
for(int i = 0; i < rackArray.length; i++){
if(rackArray[i] != null){
if(rackArray[i].nokMinne(paakrevdMinne)){
antall++;
}
}
}
return antall;
}
}
|
STACK_EDU
|
This module contains generally useful, atomic commands that aren’t otherwise categorized into a dedicated module.
These commands might be safe for use by anyone, or locked behind in-Discord permissions.
NSFW Images Detection Tools¶
GiselleBot implements an (experimental) NSFW images detection system using TensorFlow.js as its base.
The detection system is based on Infinite Red’s NSFW JS library and GantMan’s Inception v3 Keras Model for NSFW detection to classify any image as a composition of 5 categories:
Drawings: Safe for work drawings (including anime).
Hentai: Hentai and pornographic drawings.
Neutral: Safe for work neutral images.
Porn: Pornographic images, sexual acts.
Sexy: Sexually explicit images, not pornography.
The module was further converted into a back-end module and customized with a caching system to enhance its performance.
This interesting article by Infinite Red explains the reasons behind the creation of the original NSFW JS client-side module.
This module is by no means meant to reliably recognize all NSFW images. Its main purpose is quickly classifying provided images and supporting humans in better moderating a server.
The module itself will not store or expose any sexually explicit images. The output will not contain a direct link to the original image, and a censored (low resolution, blurred) version of the image will be locally cached and used to refer to the original image.
Here’s an example of an output of this command, and the corresponding censored image:
For those of you with a background in image processing - yes, Lenna is actually flagged as NSFW with a confidence score of 81.9%!
If you don’t know what I’m talking about, refer to this Wikipedia page.
!nsfwcheck (image URL, or image as a message attachment)
Submits an image against the GantMan’s Inception v3 Keras Model for NSFW detection (as explained above) and returns a detailed output about the classification.
!nsfwcache (cache ID)
Recalls an image classification output by its cache ID (as given in the footer of the !nsfwcheck command).
!nsfwthreshold [new threshold, or "-"]
While the classification scores given to an image cannot be tuned, each server can choose its own NSFW threshold (the sum of NSFW-related scores over which an image is considered NSFW).
The new threshold is an integer within the range [0, 100], inclusive of 0 (treat all images as NSFW) and 100 (only treat an image as NSFW if the model recognizes it as having no SFW components at all - which is highly unlikely, hence basically meaning “treat no images as NSFW”).
Running the command with - as the argument will reset the server threshold to the global default threshold of 60%.
Running the command with no arguments will show the current value for the server.
!nsfwthreshold 80
!nsfwthreshold -
!nsfwthreshold
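The threshold check described above amounts to summing the NSFW-related category scores and comparing against the server setting. A minimal sketch follows; the score dictionary and the category grouping are assumptions for illustration, not GiselleBot's actual internals:

```python
# Hypothetical classifier output: the five categories listed above,
# with scores summing to 1.0.
scores = {"drawings": 0.05, "hentai": 0.02, "neutral": 0.13,
          "porn": 0.55, "sexy": 0.25}

NSFW_CATEGORIES = ("hentai", "porn", "sexy")  # assumed NSFW-related classes

def is_nsfw(scores, threshold=60):
    """Flag the image when the summed NSFW-related scores, expressed as a
    percentage, meet or exceed the server threshold (global default: 60)."""
    nsfw_percent = 100 * sum(scores[c] for c in NSFW_CATEGORIES)
    return nsfw_percent >= threshold
```

With these example scores the image totals 82% NSFW, so it is flagged at the default threshold but not at a threshold of 100.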
!shorturl (long URL)
Converts a long URL into a short URL using the proprietary gisl.eu shortening service.
URLs shortened using the gisl.eu service never expire, unless deleted by the person that created the short URL (feature not available yet). The original URLs are saved as encrypted strings within the redirection database. Any sensitive data contained in the URL (authentication keys, login info, etc.) will not be exposed in case of a breach.
|
OPCFW_CODE
|
Resolve circular import for Operation and PathItem
When Callbacks were added in #568, it doesn't appear they were actually tested. Trying to generate a client for a minimal schema with a callback results in the following error:
openapi-python-client generate --path min_callback.json
Traceback (most recent call last):
File "/home/user/.local/bin/openapi-python-client", line 8, in <module>
sys.exit(app())
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.10/site-packages/openapi_python_client/cli.py", line 142, in generate
errors = create_new_client(
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.10/site-packages/openapi_python_client/__init__.py", line 338, in create_new_client
project = _get_project_for_url_or_path(
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.10/site-packages/openapi_python_client/__init__.py", line 311, in _get_project_for_url_or_path
openapi = GeneratorData.from_dict(data_dict, config=config)
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.10/site-packages/openapi_python_client/parser/openapi.py", line 511, in from_dict
openapi = oai.OpenAPI.parse_obj(data)
File "pydantic/main.py", line 521, in pydantic.main.BaseModel.parse_obj
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1038, in pydantic.main.validate_model
File "pydantic/fields.py", line 859, in pydantic.fields.ModelField.validate
File "pydantic/fields.py", line 994, in pydantic.fields.ModelField._validate_mapping_like
File "pydantic/fields.py", line 1067, in pydantic.fields.ModelField._validate_singleton
File "pydantic/fields.py", line 857, in pydantic.fields.ModelField.validate
File "pydantic/fields.py", line 1074, in pydantic.fields.ModelField._validate_singleton
File "pydantic/fields.py", line 1121, in pydantic.fields.ModelField._apply_validators
File "pydantic/class_validators.py", line 313, in pydantic.class_validators._generic_validator_basic.lambda12
File "pydantic/main.py", line 686, in pydantic.main.BaseModel.validate
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1038, in pydantic.main.validate_model
File "pydantic/fields.py", line 857, in pydantic.fields.ModelField.validate
File "pydantic/fields.py", line 1074, in pydantic.fields.ModelField._validate_singleton
File "pydantic/fields.py", line 1121, in pydantic.fields.ModelField._apply_validators
File "pydantic/class_validators.py", line 313, in pydantic.class_validators._generic_validator_basic.lambda12
File "pydantic/main.py", line 686, in pydantic.main.BaseModel.validate
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1038, in pydantic.main.validate_model
File "pydantic/fields.py", line 859, in pydantic.fields.ModelField.validate
File "pydantic/fields.py", line 994, in pydantic.fields.ModelField._validate_mapping_like
File "pydantic/fields.py", line 1067, in pydantic.fields.ModelField._validate_singleton
File "pydantic/fields.py", line 833, in pydantic.fields.ModelField.validate
pydantic.errors.ConfigError: field "_callbacks" not yet prepared so type is still a ForwardRef, you might need to call Operation.update_forward_refs().
Schema used:
{
"openapi": "3.0.1",
"info": {
"title": "An API with Callback",
"version": "v1"
},
"paths": {
"/create": {
"post": {
"responses": {
"200": {
"description": "Success"
}
},
"callbacks": {
"event": {
"callback": {
"post": {
"responses": {
"200": {
"description": "Success"
}
}
}
}
}
}
}
}
}
}
This is due to a circular import dependency of Operation -> Callback -> PathItem -> Operation -> ...
This PR delays the imports in Operation and PathItem and uses the update_forward_refs() to update those references after each class is created.
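The underlying pattern - declare the mutually referring classes with string annotations, then resolve the forward references once both types exist - can be sketched with the standard library alone. Here typing.get_type_hints plays the role that pydantic v1's update_forward_refs() plays in this PR; the dataclasses are illustrative stand-ins, not the generator's real models:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, get_type_hints

# Operation refers forward to PathItem (via callbacks), and PathItem
# refers back to Operation -- the same circular shape as the OpenAPI models.
@dataclass
class Operation:
    callbacks: Dict[str, "PathItem"] = field(default_factory=dict)

@dataclass
class PathItem:
    post: Optional["Operation"] = None

# Both classes now exist, so the string annotations can be resolved.
resolved = get_type_hints(Operation)
```

Until that resolution step runs, the annotation is still a ForwardRef, which is exactly the state the traceback above complains about.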
I'm not very experienced with pydantic, so there may well be a better method for resolving this issue.
Thanks @dachucky ! Could you add a little callback example to end_to_end_tests/openapi.json just to verify that nothing else breaks in the full flow?
|
GITHUB_ARCHIVE
|
Hashing sensitive data and checking for duplicates
I have some sensitive client data that needs to be hashed, but I also need to check that that data isn’t duplicated by another client.
So the hash function needs to produce the same value for the same data so I can search the db for duplicates.
One option is bcrypt with a constant salt, but that isn't very secure.
Any ideas?
ps. we are hashing a short string that could be thought of as a password for the purposes of this.
That is a totally different question and now more unclear. Why do you store the password? There are tons of Q/A about using password hashing algorithms like SCrypt, Argon2id, Ballloon hashing for login systems. Do you want to derive keys etc...
Better if you change your question back to "files" for the benefit of others, and create a new question if you can't find the answer you're looking for. I would be very surprised if you can't find the answer regarding short strings.
@kelalaka the original question didn't mention files, I just added clarification that I'm asking about a string that is sensitive and could be thought of a password for the purposes of this question,
data is a generic term. if it is a password then dupe of this https://security.stackexchange.com/q/211/86735
Does this answer your question? How to securely hash passwords?
No. I need to hash a small string provided by a user in such a way that I can search for duplicates of that hashed string from other users.
"but that isn’t very secure" - why is that?
@schroeder I read somewhere that using a constant salt is not a good idea - not sure why but perhaps it makes it things easier to crack if you have access to lots of samples (like the database).
"Secure" for file hashing is very different to "Secure" for password hashing.
When password hashing, you usually have a "small" string, like "password123". When someone is trying to break the password, they go through small strings and get longer until they find a "collision". Bcrypt and other "slower" choices help to slow down brute-forced password breaking, by making the algorithm more memory/CPU bound with a linear chain of hashing cycles so that GPU optimization doesn't give a significant speed boost.
For file hashing, unlike short password strings, the files are relatively huge. There's no practical way that files could be "brute-forced" to finding a collision. So an algorithm like bcrypt doesn't add any meaningful security benefit.
Therefore, SHA (and even MD5) are "secure" for file hashing. I would tend to choose a hashing algorithm that's CPU/Memory efficient and outputs a large hash string to reduce random chances of a collision with another file. A recent edition of SHA hashing algorithm is probably the best choice.
You might also transmit the length of the file along with the hash for further reduction of risk of a collision, however, that might not be valid for your situation, where revealing a file size could be saying too much.
(Note: I assume the communication of the file hashes occurs over an encrypted transport like TLS)
Looking at the other answer from kelalaka, it's a great answer, but I don't agree with a couple of points, so to clarify:
1) I don't believe salt is necessary. That's for password hashing, and further slows down the possibility of creating a universal rainbow table. However, again, this is necessary because the password is so short.
2) I don't believe that any form of signing (HMAC) is necessary. For one, that makes hash comparison impossible. Usually a signature accompanies the file bytes; the verifier may hash the file themselves, then check the signature. But also, a hash is already secure enough to disguise the data in the file - that's what it does.
Thanks for your answer. I should have been clearer in my question. I need to hash a small string, a bit like a password, rather than a file. I've updated the question.
MD5? SHA-1? What if the attacker somehow accesses the file and produces a new file with the same hash? I only said that using a single salt is not going to protect you if you are vulnerable to rainbow tables. Why do you think that HMAC makes the comparison impossible? The server has the key to create the HMAC, not the user! I suggested it for the case where the input space is small. If it is small, the attacker can search the space of an unkeyed hash function - but only if they have no access to the HMAC key.
Note: OP changed the question from file hashing to short-string hashing.
@kelalaka there is no known way to access a file and produce a new file with the same hash. You cannot create rainbow tables for files. "why do you think that HMAC makes the comparison impossible" - I never said that.
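For the deduplication question itself, the keyed-hash (HMAC) approach discussed above might look like the sketch below. SERVER_KEY and the function name are illustrative; the key must be stored outside the database (e.g. in a secrets manager), and for very small input spaces a memory-hard KDF may still be preferable:

```python
import hashlib
import hmac

# Hypothetical server-side secret; kept out of the database so a table
# dump alone does not allow offline brute force of the short strings.
SERVER_KEY = b"example-key-store-me-in-a-vault"

def dedup_digest(value: str) -> str:
    """Deterministic keyed digest: equal inputs map to equal digests,
    so duplicates are findable with an exact-match DB query."""
    return hmac.new(SERVER_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()
```

Because the digest is deterministic per key, a unique index on the digest column is enough to detect duplicates across clients.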
|
STACK_EXCHANGE
|
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)
BuildID: 20020218
Mozilla locks up when the save dialog prompts about overwriting an existing file: both buttons cannot be used, the dialog (including the underlying one) cannot be closed, and other browser windows don't respond to mouse or keyboard. However, windows do redraw and can be raised or lowered. Mozilla has to be closed by killing the first instance of mozilla-bin.
Reproducible: Always
Steps to Reproduce:
1. Go to a page to download something (e.g. ftp://ftp.kernel.org/pub/linux/kernel/v2.4/testing/incr/)
2. Choose a file to download (e.g. patch-2.4.18-rc1-rc2.bz2)
3. Choose a location where the file already exists (e.g. /usr/src)
Actual Results: Mozilla does not respond to user actions as described above.
Expected Results: Continue to work: if I choose OK, download and overwrite the file; if I choose Cancel, let me specify another filename/path; or, if I want to wait, let me use another browser window.
Linux file picker issue; sending to Brian.
Some extra info: build 2002021906 has the same problem and it looks like it is related to displaying error/warning dialogs. Using the find dialog works, but when the search phrase cannot be found a warning dialog pops up and Mozilla hangs in the same way as reported above. When started from a terminal, this is the last output before Mozilla hangs:
-- start --
Same node -- (0, 1)
Will need to pull a new node: restart=0, frag len=1
Loop ...
New node -- Resetting pointers.
Deque size is 0
No more deque, looking in tree
::::::::::::::::::::::::::
Got another node
>>>> Node: NULL
Iterator gave:
>>>> Node: NULL
Clear the deque!
-- end (mozilla hangs) --
I don't see this with build 2002-02-20-08. Rudmer, what windowmanager are you using?
I'm using KDE 2.1.2
That would be the desktop environment, not the window manager....
just using KDE's default: rudmer@gandalf:~ # kwin --version Qt: 2.3.1 KDE: 2.1.2 KWin: 0.9
Can you reproduce this in a recent build ( or even 0.9.9 )? I think the 'Find in page' was fixed sometime ago. If it was caused by the same thing maybe it is fixed ...
wfm, as nobody can reproduce it. reopen if you can.
mass-verifying WorksForMe bugs. reopen only if this bug is still a problem with a *recent trunk build*. mail search string for bugspam: AchilleaMillefolium
|
OPCFW_CODE
|
Lift has broad support for localization at the page and element level.
8.1.1 Localizing Templates
The locale for the current request is calculated based on the function in LiftRules.localeCalculator. By default, the function looks at the Locale in the HTTP request, but you can change this function to look at the Locale for the current user by changing LiftRules.localeCalculator.
When a template is requested, Lift’s TemplateFinder looks for a template with the suffix _langCOUNTRY.html, then _lang.html, then .html. So, if you’re loading /frog and your Locale is enUS, then Lift will look for /frog_enUS.html, then /frog_en.html, then /frog.html. But if your Locale is Czech, then Lift would look for /frog_csCZ.html, then /frog.html. The same lookup mechanism is used for templates accessed via the Surround and Embed snippets. So, at the template level, Lift offers very flexible templating.
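A minimal sketch of that lookup order (in Python for illustration only; Lift itself is Scala, and the function name `template_candidates` here is purely hypothetical, following the general three-suffix rule stated above):

```python
def template_candidates(path, lang, country=None):
    """Candidate template filenames in the order Lift's TemplateFinder
    tries them: _langCOUNTRY.html, then _lang.html, then .html."""
    names = []
    if country:
        names.append(f"{path}_{lang}{country}.html")
    names.append(f"{path}_{lang}.html")
    names.append(f"{path}.html")
    return names
```

So a request for /frog with Locale enUS yields /frog_enUS.html, /frog_en.html, /frog.html, matching the example above.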
Note: Lift parses all templates in UTF-8. Please make sure your text editor is set to UTF-8 encoding.
8.1.2 Resource Lookup
Lift uses the following mechanism to look up resources. Localized resources are stored in template files alongside your HTML pages. The same parser is used to load resources and the pages themselves. A global set of resources is searched for in the following files: /_resources.html, /templates-hidden/_resources.html, and /resources-hidden/_resources.html. Keep in mind that Lift will look for the _resources file using the suffixes based on the Locale.
The resource file should be in the following format:
In addition to global resource files, there are per-page resource files (based on the current Req.) If you are currently requesting page /foo/bar, the following resource files will also be consulted: /foo/_resources_bar.html, /templates-hidden/foo/_resources_bar.html, and /foo/resources-hidden/_resources_bar.html (and all Locale-specific suffixes.) You can choose to create a separate resource file for each locale, or lump multiple locales into the _resources_bar.html file itself using the following format:
<res name="hello" lang="en" default="true">Hello</res>
<res name="hello" lang="en" country="US">Howdy, dude!</res>
<res name="hello" lang="it">Benvenuto</res>
<res name="thank.you" lang="en" default="true">Thank You</res>
<res name="thank.you" lang="it">Grazie</res>
<res name="locale" lang="en" default="true">Locale</res>
<res name="locale" lang="it">Località</res>
<res name="change" lang="en" default="true">Change</res>
<res name="change" lang="it">Cambia</res>
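The way these res entries resolve can be sketched as follows (a toy Python model, not Lift’s actual implementation; the `lookup` function and the dictionary encoding of the entries are assumptions for illustration). It prefers an exact lang+country match, then a lang-only match, then the entry flagged default, and falls back to the resource name itself when nothing matches:

```python
# Toy encoding of some of the <res> entries above.
RESOURCES = [
    {"name": "hello", "lang": "en", "default": True, "text": "Hello"},
    {"name": "hello", "lang": "en", "country": "US", "text": "Howdy, dude!"},
    {"name": "hello", "lang": "it", "text": "Benvenuto"},
    {"name": "thank.you", "lang": "en", "default": True, "text": "Thank You"},
    {"name": "thank.you", "lang": "it", "text": "Grazie"},
]

def lookup(resources, name, lang, country=None):
    """Prefer an exact lang+country match, then a lang-only match,
    then the entry flagged default; fall back to the name itself."""
    entries = [r for r in resources if r["name"] == name]
    if country:
        for r in entries:
            if r.get("lang") == lang and r.get("country") == country:
                return r["text"]
    for r in entries:
        if r.get("lang") == lang and "country" not in r:
            return r["text"]
    for r in entries:
        if r.get("default"):
            return r["text"]
    return name
```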
8.1.3 Accessing Resources
Lift makes it easy to access resources.
From snippets: <span class="lift:Loc.hello">This Hello will be replaced if possible</span> Note that the value after the . in the snippet invocation is used to look up the resource name.
S.loc("hello") - return a Box[NodeSeq] containing the localized value for the resource named “hello”.
S.??("Hello World") - look for a resource named “Hello World” and return the String value for that resource. If the resource is not found, return “Hello World”.
Lift offers a broad range of mechanisms for localizing your application on a page-by-page and resource-by-resource basis.
(C) 2012 David Pollak
|
OPCFW_CODE
|
On 11-11-2014 2:51, Bryce Harrington wrote:
On Mon, Nov 10, 2014 at 10:21:07AM +0100, Tavmjong Bah wrote:
> On Sun, 2014-11-09 at 23:23 -0500, Martin Owens wrote:
>> On Sun, 2014-11-09 at 23:06 +0100, Johan Engelen wrote:
>>> We had a bit of a discussion on IRC about this. For simplicity, I
>>> think it'd be good if we keep this a developer education campaign,
>>> i.e. C++ books. And then have another something for the non-coders. We
>>> of course appreciate those efforts too, I just don't know how to
>>> "rank" them and what to give.
>> So is inkscape-web's python code C++ or non-code? ;-)
>> I wouldn't want Maren to miss out in 2015 or 16 if we run this every
>> year. These contributions aren't core inkscape, or event extensions. But
>> they are code and they are for inkscape the project.
> Maybe we should send our top Python coders a Python book.
> How many Python coders do we have?
The right question might be, how many people do we have that would like
to learn Python. ;-)
I could imagine a highly active translator or C++ developer might want
to improve their Python and select the book, even though their Inkscape
contributions were not in Python. And I can imagine Python developers might
have interest in boning up on C++.
So, maybe we want to have a pool of books to choose from of various
topics, then pick the top contributors in various areas, and let them
decide themselves what they want to learn.
I'm not sure what Python books to recommend putting on the list. It
seems there's so much good info for free on the web, I haven't ever
needed a python book. But there's a Python Cookbook that apparently
illustrates use of Python3 idioms. A Django/Python book might be of
interest among those wanting to work on the website.
OK, I think my idea of restricting the book choice is not needed.
I will add a pointer to the StackOverflow list as a suggestion, but if
someone wants another book, fine.
I asked Conservancy if there are limits to what we can give people, and
I'll take the answer to that to limit what people can choose.
Perhaps we can add the restriction that the item has to come from online
store X (probably Amazon), so that the person ordering the gifts has an
|
OPCFW_CODE
|
Version 3.8 of SVUnit, just released, improves support and usability for people unit testing UVM components.
But before we talk about new features, I want to mention that the cool part of this release… for me anyway… is that I didn’t have to do anything! Colleen Piercey and Dave Read, colleagues of mine from XtremeEDA, are responsible for adding and testing the version 3.8 features. That makes Colleen and Dave the newest active contributors to SVUnit!
As for the new 3.8 features, people can now use create_unit_test.pl to generate a UVM-specific test case template. The template gives test writers placeholders for connectivity in an auto-generated UUT wrapper. It also inserts file includes, package imports and required function calls in the setup and teardown to avoid people having to do it themselves. Importantly, the new UVM test case template compiles and runs with UVM as-generated, so you start writing tests from a known good state.
To create the new test template, Colleen and Dave added a ‘-uvm’ switch to the create_unit_test.pl. So, for example, if you’re writing unit tests for a UVM component called blah, you would generate a test case template with…
>create_unit_test.pl -uvm blah.sv
That’ll give you a test case template in blah_unit_test.sv. At the top of the test case template, you’ll see a wrapper defined for blah called blah_uvm_wrapper. The wrapper looks like this…
The connect phase in that wrapper becomes the right place for any light unit test infrastructure that’s missing from the default test case template, if required. For me, this has been stuff like TLM FIFOs or stub connections to analysis ports that your UVM component would typically interact with in a real testbench arrangement. Of course the wrapper gives you the opportunity to also add any for-test-only logic that helps you isolate and test behaviour in blah. If you have no connectivity or extra considerations, there’s nothing required in that wrapper; it’s just there for convenience.
As for the other boilerplate code that’s included in the UVM test case template, you’ll see most of that in the build, setup and teardown tasks (note the activate/deactivate and test_start/test_finish function calls)…
Having to insert all those function calls was annoying the way I had it in the default test case template, admittedly, so I’m glad the new template takes care of them.
Thanks again to Colleen and Dave for the new UVM support in 3.8! Makes my life easier. Same for anyone else using SVUnit to unit test UVM components :).
PS: Heads-up that Dave is currently working on additional support for UVM1.2. For teams using UVM1.2, Dave should have that figured out in the next couple weeks at which time we’ll do a version 3.9.
|
OPCFW_CODE
|
If an A/C has a radio failure (squawking 7600), how will ATC give it instructions, and how will it land?
This is a question with many answers, depending on the conditions of the flight. Let me limit myself to private recreational aviation (what the FAA calls Part 91) in a single-engined propeller aircraft.
If you are flying Visual Flight Rules (VFR), there are some airspaces you need to stay out of. For instance, US and Canadian Class B and C airspaces require radio communication. You won't be able to land at airports in such airspaces without some other arrangements.
You can give a signal that you are lacking radio communication. Squawking 7600 on your transponder is one way. Flying a triangular pattern, which might be visible to ATC on radar, is another way. I have sometimes used my cell phone to call ATC's phone number.
Control towers in the US and Canada might well have "light guns": bright spotlights which the controller can point at an aircraft, and colour red or green, and shine continuous or blinking light. A continuous green light from tower to an aircraft preparing to land means that aircraft is cleared to land.
If you are flying Instrument Flight Rules (IFR), there are specific procedures to follow if you are unable to communicate with ATC. Frequently they boil down to holding for a few minutes, then continuing on your flight planned route.
Remember that some aircraft have no electrical system, let alone radios. They fly, take off, and land just fine. There are many airfields with no control towers. There is uncontrolled airspace in which to fly. In these cases, pilots maintain safety by seeing and avoiding other aircraft. Pilots fly traffic patterns before landing, and sequence themselves cooperatively. Lots of aviation works just fine without radio communication between pilots and ATC.
This also depends on the nature of the failure: If you can still receive but not transmit ATC will frequently ask you to acknowledge instructions by using the IDENT function on your transponder. This lets them know that you can still receive their instructions and they can direct you someplace where you won't be a problem.
Many thanks to you.
Good answer - I think the IFR comment is the relevant one here - you squawk 7600 for long enough for ATC to notice you and get everyone out of your way... then you continue as planned and land, trusting ATC to deal with things. That's essentially the point of the 7600 code - to flag your situation up to ATC and let them deal with it. If you declared an emergency then they'd clear traffic out of your way anyway, and "We don't know where we're meant to be!" is somewhat urgent...
|
STACK_EXCHANGE
|
How does windows know the file source when I am performing the file paste operation?
After we copy a file, we can wait a long time before pasting it to the destination.
I have googled but can't figure out where Windows stores the information about the source file.
I don't think it's the clipboard that is responsible for the data exchange.
It's the clipboard. There's quite a number of different pieces of data (clipboard objects) that can be stored in a single clipboard entry, different pieces of data describing the same entry.
The most common approach is to store a pointer to the file, usually the file's path. This is usually in the CF_HDROP format. The program performing the paste operation needs to support this format, and can go and read the file that it's pointing to. This is what Windows Explorer does.
It's also possible to store an entire file's data in the clipboard, which is later pasted out of it. This is usually used for transient and small files, and is rather inefficient. Outlook is one such application, when you copy (or drag-drop) an email. This is not supported by nearly as many applications as the pointer approach is; for example, it's not possible to paste or drop these into most web browsers.
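That "one entry, several formats" model can be illustrated with a toy sketch (Python, purely illustrative: the `Clipboard` class and its methods are invented for this example; only the format names CF_HDROP and the file-contents idea echo the answer above, and the cut marker mimics a Windows format like "Preferred DropEffect"):

```python
class Clipboard:
    """Toy model: one clipboard entry holds several formats describing
    the same data; a paste target picks the first format it supports."""
    def __init__(self):
        self.entry = {}

    def copy_file(self, path, data=None, cut=False):
        self.entry = {"CF_HDROP": [path]}           # pointer: the file's path
        if data is not None:
            self.entry["FileContents"] = data        # whole file, Outlook-style
        if cut:
            # 'cut' is just an extra marker object alongside the path
            self.entry["Preferred DropEffect"] = "move"

    def paste(self, supported):
        for fmt in supported:
            if fmt in self.entry:
                return fmt, self.entry[fmt]
        return None, None

clip = Clipboard()
clip.copy_file(r"C:\docs\a.txt", cut=True)
fmt, value = clip.paste(["CF_HDROP"])   # an Explorer-style paste reads the path
```

Note that cutting only added a marker; the path object is unchanged, which matches the point below that the source file is not touched until the paste completes.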
...& this is the reason cut/paste can be so dangerous in Windows. Change the buffer, lose the file.
@Tetsujin If you're cutting from Explorer, it simply stores a reference. The source file is not removed until after the paste operation is complete. It's modeled as a copy => paste => delete (or a move, if on the same volume). There's nothing dangerous about it.
Google 'cut paste lost files'.
@Tetsujin As usual, nothing of substance. Data loss from interrupting the process (by unplugging devices, power loss, etc.) has nothing to do with the transfer method but rather how write caches in both the OS and on the disk (hardware/firmware) work. Given that "cut" literally does nothing more than add a file path to the clipboard and mark the file, there is no way for "cut" to be dangerous. The file transfer operation itself (which is outside the realm of cutting/clipboard and can be achieved by, e.g., right click => move to) is a different matter.
OK. I shall continue to never trust it. macOS doesn't even have a cut for files in Finder, you can only invoke 'cut' at paste time.
@Tetsujin The only difference between cut and copy is cut marks the file, which tells the paste/transfer operation (a) that it can be a move (if on same volume) and (b) to delete it later. Everything else is functionally the same at cut/copy time. At paste time the operation may be different. (side note, this cut vs copy marker is possibly transmitted in the clipboard as one of the alternate objects, but distinct from the primary CF_HDROP and must be interpreted by the paste operation. it's possible for a non-Explorer paste to ignore the 'cut' marker and copy the file without deletion)
|
STACK_EXCHANGE
|
Microsoft appears set on getting into the social space, whether by owning it or facilitating it. It’s kind of like “let someone else build it and if they come we’ll go get them and invite them over.” Now it appears they are going for the Mall approach, rather than the franchise or leveraged buyout approach. Or at least, so it seems.
In a prior post, we noted Google’s opening the cross-platform communications mode with OpenSocial, and the many developers working on an aggregator for users. Could this latest venture serve as an aggregator not just for individual profiles, but also one for groups? We are still looking for a mobile solution, too . . . waiting to be invited to participate in the mashup of Dashwire and ProfileLinker!
Microsoft is working with Facebook, Bebo, Hi5, Tagged and LinkedIn to create a safe, secure “two-way street” so we can move our profiles and relationships between social networking sites. It’s a little late for that, isn’t it? How ’bout something that will synchronize what we have, or maybe even a business and personal profile, with by-individual or by-group access? We’ve already copy-pasted our “About Me” and a variety of likes and quotes and . . . What happened to the Open Social adventure that Facebook was avoiding making a commitment to?
Microsoft has been using SharePoint, with support for wikis, blogs and RSS feeds, with privacy and security so everyone can feel secure, for enterprise social networking, but now they are going after those who aren’t connected by their internal company relationships. And they are proposing that we help them by using Windows Live Messenger to connect with Facebook (available now), Bebo, LinkedIn, Hi5 and Tagged (coming soon). The strategy starts with inviting your friends and connections to connect on Windows Live Messenger (not sounding a lot like portability here — I am thinking “import from”).
So I tried the only currently available option — Facebook. A login to Facebook screen (with Windows Live logo but a Facebook URL) popped up, and the first try on login failed (hmmm, a phishing site?). But the next screen had the Facebook logo, and it logged me in just fine. I didn’t however, see where I could add anyone to an invite list, so . . . I gave up and started blogging.
I was using MS Internet Explorer on XP on a Dell, so maybe that’s what the problem was. Next time I find myself with nothing to do but beta-test for Microsoft, perhaps I will try Firefox on Leopard on a Mac.
I’m not sure that this will be a profitable venture for Microsoft, but it’s worth a try. We know that owning a centrally located piece of real estate and inviting big names to stake their claim there has worked in the real world in the past. Microsoft has shown their ability in Web 1.0 to make money, and it’s apparent that no one in social networking has figured out how to do that yet . . .
So we’ll just keep beta testing while Microsoft keeps building . . .
Note that when I recently installed FriendFeed and Twitter on Facebook, it went off without a hitch. They obviously aren’t related to Microsoft.
What do you think?
|
OPCFW_CODE
|
Please help about dhcp with changing the MAC value
24th May 2007, 08:11 AM
I got an Internet line from an ISP named Zip, which provides serious broadband networking with MAC registration.
They gave me an address like 0002449E1CE0, which does not match my NIC. Now I have to change the value of my NIC to the given address. I also use DHCP on my NIC.
So how can I do it?
I'm using Fedora Core 4. I've turned my NIC over to DHCP, but I can't change the MAC value of the NIC and am also unable to access the Internet.
24th May 2007, 08:34 AM
FC4 was a while ago, but in the network GUI manager there is
a tab that shows the computer name and IP.
Double-click that and it opens a page with many items.
About 2/3 down the page is a place to enter the MAC address.
There are some check boxes that control things like:
Let / don't let me EDIT the MAC address.
Probe for the NIC's MAC.
and so forth.
For DHCP you need your ISP gateway or your router's gateway IP,
and maybe the DNS server IPs if the gateway is not enough.
Look around on the tabs.
24th May 2007, 08:46 AM
Yes, I've checked it out but it did not help.
And with DHCP, you know, I'll just request the IP, gateway, DNS server name and so on.
So why should I fix those on my NIC? I have to change my NIC's MAC value. In the network GUI, if I change the value, it reverts to its original value after probing. Please help.
24th May 2007, 08:53 AM
After you edit the MAC, don't probe it.
And make sure the check boxes are checked accordingly.
Why not install the Unity re-spin DVD for FC6?
It's updated through February, and it installs and runs several factors better than FC4.
24th May 2007, 10:09 AM
I think you're probably looking at this the wrong way. If your ISP is filtering who can use the link via MAC address filtering, then I would be looking at your gateway (ADSL or cable modem), since all outgoing packets will be coming from the MAC address of that, not your internal NIC. There is no point spoofing the MAC address on your NIC if the packets are just going to appear to come from a different MAC address.
24th May 2007, 11:50 AM
Thanks to everybody who replied. My problem is solved. I just changed my NIC's MAC value by editing
/etc/sysconfig/network-scripts/ifcfg-eth0, where I wrote:
HWADDR=the address given by the ISP, like 00:12:e3:E0:1c
And it's working now.
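For reference, a fuller ifcfg-eth0 sketch along those lines (assumptions: the MAC shown is the ISP-issued 0002449E1CE0 from the first post, rewritten with colons; note also that on later Fedora/RHEL initscripts the documented key for overriding the MAC is MACADDR=, while HWADDR= is intended to match the NIC's real burned-in address):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
# MAC presented on the wire; prefer MACADDR= on newer initscripts releases
HWADDR=00:02:44:9E:1C:E0
```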
|
OPCFW_CODE
|
Decommission older Windows systems
Our Windows builds are now all running on Windows Server 2022 systems. Now that 2012 has reached the end of extended support, we should decommission those and, if required, replace them with later versions.
We should also aim to replace the one Server 2016 machine that we have with a Server 2019 system.
It may make sense to try replacing some of the old Server 2012 boxes with Windows 11 systems to cover testing there too, but that should be evaluated to ensure that the playbooks run on that platform and we don't get any unexpected issues.
So a phased approach to the machine we no longer want:
[x] Mark them offline in jenkins
[x] Run AQA tests on the TC win11 systems in preparation to see if Windows11 is a viable test platform
[ ] Switch them off
[x] Create and activate replacement systems in jenkins
[ ] Delete old ones at the providers
All Windows TCK machines except one are Windows 11 as per https://ci.eclipse.org/temurin-compliance/computer/
Currently Running 32 bit tests on Win2022, with WSL disabled
test-azure-win2022-x64-1 has been commissioned following successful testing.
test-azure-win2022-x64-2 has been created and is in jenkins : https://ci.adoptium.net/computer/test%2Dazure%2Dwin2022%2Dx64%2D2/
Testing of the new node in progress...
test-azure-win2022-x64-2 has been commissioned following successful testing.
Powered off:
test-azure-win2012r2-x64-1 & test-azure-win2012r2-x64-3
@AdamBrousseau We have four Windows Server 2012r2 machines defined in the IBM Cloud - am I right in saying that we can't update / reprovision them as 2022 systems ourselves, so you would have to manage that for us?
Correct, I can delete the old ones and provision the new ones for you (not necessarily in that order). Possibly based on a snapshot of an existing one, if that is of interest. It is on my radar as I have to do the ones for the OpenJ9 project as well. I just don't know when that will be as nobody is pushing hard on me for it yet.
Correct, I can delete the old ones and provision the new ones for you (not necessarily in that order). Possibly based on a snapshot of an existing one, if that is of interest. It is on my radar as I have to do the ones for the OpenJ9 project as well. I just don't know when that will be as nobody is pushing hard on me for it yet.
From my perspective we've now shut off all of the win2012 ones from active service so it can be switched over any time. Since this means we're down to only one provider for Windows systems at the moment (Azure) it would be good to get them replaced and verified before the next quarterly release in January. I don't think we have a suitable system to clone at the moment, but it's probably just as easy to give us clean ones and let us fire the Ansible AWX server at them. I'll let @steelhead31 object to that if he wishes ;-)
Sounds like a plan to me.. :)
The IP address <IP_ADDRESS> (consistent with https://github.com/adoptium/infrastructure/blob/b728c86a1b2fe798c29cae85f7b23e50ff9686fa/ansible/inventory.yml#L45) is repeatedly (once per minute) trying to access its slave-agent.jnlp in Jenkins and receiving a 404. We should stop this or (most likely) fully decommission the machine now.
I missed the last few comments. Is my understanding correct that I can shutdown all your windows build and test systems that are running win2012? @sxa
I'm 99% certain that'll be fine but I suggest we hold off to next week when @steelhead31 is back if that's ok but I believe the intention was to replace them all with something else (potentially a mix of Win2022 and RHEL? Can't recall where we discussed that)
Likely discussed over slack but I thought the plan was to do 1-1 replacement with win2022's. I'm open to discussion.
Friendly bump @steelhead31
I think that's reasonable (and I guess this explains why Nagios is showing
alerts against the machines!) Will be interesting to see the timings for
the Windows systems coming out of my dry runs tonight for all four releases
and 32+64 bit.
For RHEL we should have at least one RHEL9 now I think, with the rest on 8 or 9. Will
be good to have more systems using podman by default (although the test
jobs may require some updates - there's an aqa-tests issue about that)
On Tue, 9 Apr 2024, 21:01 Scott Fryer, @.***> wrote:
@AdamBrousseau https://github.com/AdamBrousseau , yes please, lets
shutdown the 2012 machines ( a total of 4 I believe )
Build :
win2012r2-x64-1: {ip: <IP_ADDRESS>}
win2012r2-x64-2: {ip: <IP_ADDRESS>}
Test :
win2012r2-x64-1: {ip: <IP_ADDRESS>}
win2012r2-x64-2: {ip: <IP_ADDRESS>}
And if @sxa https://github.com/sxa agrees
I think replacing them with a mix 2 x Win-2022 ( 1 x Build, 1 x Test ) and
2 x RHEL ( 1 x Build & 1 x Test ) would be ideal.
Spotted today: The build-azure-win2012-x64-2 machine (as opposed to 2022) is still live and should be decommissioned and removed from the inventory. Also we have some cases where the machine's hostname does not match the name in the inventory/jenkins (May not be easy to change, but just worth being aware of)
Noting also that the win2022 build machines still have the ci.role.test label which we should look at removing to ensure the build machines are dedicated as far as possible (I've just had a build fail to be scheduled because of it) ... Although looking at that list we do seem horribly low on Windows test systems just now ...
I've stopped (2016 - <IP_ADDRESS> ) and (2019 - <IP_ADDRESS> ) both test machines in the public inventory.
However, there are 2 other Windows machines ( 2016 - <IP_ADDRESS> , 2019 - <IP_ADDRESS> ) still running that aren't in either inventory... does anybody know what these are for?
These 2 windows machines have now been shutdown ( 2016 - <IP_ADDRESS> , 2019 - <IP_ADDRESS> ) to potentially identify any issues.
I think we're done with this now aren't we @steelhead31? All the old ones have now gone as per our recent tracking spreadsheet.
Yes, all old Windows machines have been decommissioned.
|
GITHUB_ARCHIVE
|
Action of charge conjugation on bispinors
I'm following an introductory course to particle physics. We have introduced Klein-Gordon's equation for spinless particles and Dirac's equation for spin $1/2$ particles. Klein-Gordon's equation works with the usual wavefunctions $\Psi(\vec r)$. Dirac's equation works with bispinors meaning it describes the behaviour of a particle-antiparticle pair with four wavefunctions: the first two relating to the particle $a$ and the remaining two describing the antiparticle $\bar a$.
$$
\Psi_{a\bar a}(\vec r_1, \vec r_2) = \begin{pmatrix}
\Psi_{a\uparrow}(\vec r_1) \\
\Psi_{a\downarrow}(\vec r_1) \\
\Psi_{\bar a\uparrow}(\vec r_2) \\
\Psi_{\bar a\downarrow}(\vec r_2) \\
\end{pmatrix}
$$
Following Martin and Shaw's book (4th edition, section 1.3.2), the professor gave a definition of the charge conjugation operator $\hat C$ using Dirac's ket notation. We said that if $a = \bar a$ while $b \neq \bar b$, then
$$
\begin{aligned}
&\hat C \vert{a}\rangle = \vert{\bar a}\rangle = \pm \vert{a}\rangle, \\
&\hat C \vert{b}\rangle = \vert{\bar b}\rangle, \\
&\hat C \vert{b\bar b}\rangle = \pm\vert{b\bar b}\rangle. \\
\end{aligned}
$$
This is a rather unclear definition of $\hat C$, since we have not mathematically specified the meaning of these kets.
The question: Is it possible to define the action of $\hat C$ on individual wavefunctions for spin zero particles? Can we do the same with bispinors in the context of spin 1/2 particles?
My guess: I'm guessing
$$
\hat C\begin{pmatrix}
\Psi_{a\uparrow}(\vec r_1) \\
\Psi_{a\downarrow}(\vec r_1) \\
\Psi_{\bar a\uparrow}(\vec r_2) \\
\Psi_{\bar a\downarrow}(\vec r_2) \\
\end{pmatrix} = \begin{pmatrix}
\Psi_{\bar a\uparrow}(\vec r_1) \\
\Psi_{\bar a\downarrow}(\vec r_1) \\
\Psi_{a\uparrow}(\vec r_2) \\
\Psi_{a\downarrow}(\vec r_2) \\
\end{pmatrix}
$$
but I'm not sure.
|
STACK_EXCHANGE
|
TestComplete 15.61 + License Key Free Download 2024
One of its standout features is its robust object recognition capability. TestComplete uses a combination of properties and methods to accurately identify and interact with UI elements across different environments. This enhances test reliability and stability, crucial for automation. TestComplete supports keyword-driven testing, enabling testers to create tests using simple keywords that represent various actions and verifications. This approach simplifies test creation and maintenance, abstracting technical complexities.
TestComplete integrates seamlessly with various third-party tools and systems, including CI/CD pipelines like Jenkins or TeamCity. This allows for continuous testing and integration within development workflows. Additionally, it enables collaboration among team members by facilitating shared test repositories and resources. It generates detailed reports with comprehensive insights into test runs, failures, and execution logs. These reports are crucial for identifying issues, tracking progress, and communicating results across teams and stakeholders effectively.
TestComplete + Activation Key
TestComplete + Activation Key supports parallel testing, allowing multiple tests to run simultaneously. This significantly reduces test execution time, enhancing efficiency, especially for larger test suites. Its version control capabilities streamline test maintenance by enabling testers to manage different test versions effectively, ensuring traceability and accountability throughout the testing process. Diving deeper into TestComplete, the sections below cover additional aspects and functionalities that make it a comprehensive and powerful automated testing tool, and this dedication to improvement ensures that the tool stays relevant and competitive in the ever-evolving landscape of software development and testing.
TestComplete boasts an active user community and provides extensive documentation, tutorials, and support resources. This support ecosystem is invaluable for users seeking guidance, troubleshooting, or looking to expand their knowledge base. TestComplete stands out as a robust and user-friendly automated testing solution, catering to the diverse needs of software testers across industries and domains. Its array of features empowers testers to create, execute, and maintain tests efficiently, contributing significantly to the quality assurance process.
TestComplete encourages a modular approach to test creation, allowing users to build reusable components and modules. This modularity enhances test scalability and maintainability, enabling efficient updates and modifications without impacting the entire test suite. Its Test Visualizer feature captures screenshots or videos during test execution. This visual documentation assists in understanding test failures, making it easier for testers to identify issues within the application’s UI.
TestComplete + Serial Key
TestComplete + Serial Key supports data-driven testing, enabling users to parameterize tests and execute them with different datasets. This capability enhances test coverage and allows for thorough validation under various scenarios and inputs. Beyond functional testing, TestComplete integrates performance testing capabilities through tools like LoadComplete. This allows for a comprehensive approach, encompassing both functional and load testing within a unified environment. SmartBear consistently updates TestComplete, introducing new features and enhancements based on user feedback and industry trends.
TestComplete caters to a wide range of applications, supporting technologies like .NET, Java, HTML5, and more. Additionally, it adapts to various environments, whether it’s testing on local machines, virtual machines, or cloud-based infrastructure. The SmartBear Community serves as a valuable resource for TestComplete users. It offers forums, knowledge bases, and discussions where users can seek advice, share experiences, and learn from peers. Additionally, SmartBear provides training courses and certifications to help users maximize their proficiency with the tool.
TestComplete allows users to customize test execution by defining specific configurations, environments, or parameters for running tests. This flexibility accommodates diverse testing requirements and scenarios. It includes features that aid in accessibility testing, ensuring applications comply with accessibility standards like WCAG (Web Content Accessibility Guidelines). This is crucial for applications aiming for inclusivity and compliance with accessibility regulations.
- It allows testing across various platforms including web, mobile, and desktop applications, ensuring comprehensive coverage across different environments.
- Robust object recognition capabilities facilitate precise identification and interaction with UI elements, enhancing test accuracy and reliability.
- Its keyword-driven approach simplifies test creation by using keywords to represent actions and verifications, abstracting technical complexities for easier test maintenance.
- Seamless integration with CI/CD tools like Jenkins, TeamCity, and version control systems enhances continuous testing and collaboration within development pipelines.
- The ability to run multiple tests simultaneously significantly reduces test execution time, improving overall testing efficiency.
- Detailed reports provide comprehensive insights into test runs, failures, and execution logs, aiding in issue identification and progress tracking.
- Encourages modular test creation, facilitating reusability, scalability, and ease of maintenance across the test suite.
- Supports parameterization of tests with different datasets, enabling thorough validation under various scenarios and inputs.
- Seamlessly integrates performance testing capabilities, allowing for a holistic approach covering both functional and load testing.
- Accommodates testing of various applications and environments, supporting technologies like .NET, Java, HTML5, and more.
- Offers an active user community, extensive documentation, forums, and training resources to aid users in learning, troubleshooting, and expanding their proficiency with the tool.
- Faster than the previous version.
- Minor bugs were solved for the best performance.
- Windows 7/8/10 64bit and Mac OS.
- Processor:0 GHz.
- Display: 1024 x 768.
- Disk Space: 4 GB.
- 1 MB VRAM
How To Install?
- Download the most recent version of TestComplete first.
- If you are still using the prior version of this software, remove it.
- Switch off the internet and virus protection software.
- Next, extract the files without running the setup.
- Run the setup after extracting the files here.
- Once the setup has completed, shut it down completely.
- Copy and paste the G file into the installation folder after opening it.
- Next, turn on this software.
Intermediate Remote Sensing Scientist / Engineer
This job is no longer active.
View similar jobs.
POST DATE 5/22/2020
END DATE 6/19/2020
Frontier Technology Inc.
Stony Brook, NY
Frontier Technology, Inc. is seeking a Physical Scientist to support existing contracts in the Beverly, MA area. Frontier Technology, Inc.'s Sensor and Data Services group is a collaborative team of scientists working with space-based imaging and non-imaging EO-IR sensors. We have extensive experience in sensor requirements definition, sensor design, sensor calibration, data analysis, radiometric performance validation, physics-based and phenomenological simulation tools, and data management.
As a physical scientist, you will work as part of a team of scientific analysts and software developers to develop novel approaches to object identification and tracking in image data, as well as inferring object characteristics or automatically identifying anomalies in telemetry data. You will be responsible for overseeing the development of these novel approaches end-to-end, from conception, to prototyping and testing approaches, to reporting on results and guiding the development of deployable software. You will be working in a team that is responsible for sensor calibration, simulation of real sensor systems, systems engineering, and automation of data management. Additional responsibilities include:
* Develop algorithms associated with image processing, coordinate transformation, sensor calibration, data simulation and modeling, astrometry, and software toolchain automation.
* Perform analysis of sensor data, simulated systems, and algorithms.
* Work collaboratively with other scientists on the team to complete algorithm development or analysis tasks.
* Interface with software developers on the team to define software requirements and guide software implementation.
* Interface with customers to brief work and define tasking.
Required Education and Experience:
* PhD in Astronomy, Physics, Applied Mathematics, or a related discipline.
* Practical knowledge of telescopes and optical systems, including observation planning or mission design.
* Strong technical background in astronomy or remote sensing.
* Experience with standard data analysis and reduction procedures.
* Strong foundation in mathematics. Experience with the application of statistical concepts and techniques to data preferred.
* Scientific programming competency in Python or MATLAB
* Excellent communication skills, with emphasis on the ability to communicate technical results to non-technical audiences (such as customers).
* Must be a U.S. Citizen and able to obtain/maintain a U.S. government security clearance.
* This position will require minimal travel.
Preferred Experience:
* Optical signatures modeling.
* EO/IR detector science.
* Machine learning or statistical inference.
* Multi-scale modeling, first principles and phenomenological simulation of physical systems.
* Technical work on a sensor program.
FTI is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other protected class.
We have been using LumenRT for a while now, but with Revit 2019 now being used more widely, when can we expect the plugin to be updated/released?
It's in LumenRT Update 12 - out later this month
LumenRT Update 12 at C:\Program Files\Bentley\Bentley LumenRT CONNECT Edition - Update 12\LumenRT\Export\Revit doesn't have any DLL file for Revit 2019.
You may be right about this. I think with an installer, the issue goes away.
Here is the production version of LumenRT Update 13. Can you try installing this and see if the Revit export works:
Any new developments on this? Revit 2019 was loaded first, then I loaded OpenBuildings with LumenRT v12, and the Connection Client has run the most recent update to LumenRT v13, and the add-in still doesn't show in Revit 2019. Thoughts?
rv_LumenRT2019.dll can be found in the following folder:
C:\Program Files\Bentley\Bentley LumenRT CONNECT Edition - Update 13\LumenRT\Export\Revit
However, I don't believe the DLL will be loaded into Revit without a .addin file to act as a manifest. The .addin file is written in XML and looks something like:
<?xml version="1.0" encoding="utf-8"?>
<RevitAddIns>
  <AddIn Type="Application">
    <Name>Model Review</Name>
    <Assembly>C:\Program Files\Autodesk\Revit Model Review 2019\ModelReview.dll</Assembly>
    <AddInId>8e406bfc-b416-4ecb-b639-c290ca3181f2</AddInId>
    <FullClassName>BIMStandardsManager.ExternalApp</FullClassName>
    <VendorId>ADSK</VendorId>
    <VendorDescription>Autodesk, subscription.autodesk.com</VendorDescription>
  </AddIn>
</RevitAddIns>
OK, with a bit of reverse engineering, generating a GUID, and some guesswork on the namespaces in rv_LumenRT2019.dll (using Visual Studio to sleuth them), my son and I created a working manifest file. The following text can be added/pasted into a .addin file to create a working manifest:
<?xml version="1.0" encoding="utf-8"?>
<RevitAddIns>
  <AddIn Type="Application">
    <Name>Model Review</Name>
    <Assembly>C:\Program Files\Bentley\Bentley LumenRT CONNECT Edition - Update 13\LumenRT\Export\Revit\rv_LumenRT2019.dll</Assembly>
    <AddInId>d452173f-aad5-4062-8823-318474266d45</AddInId>
    <FullClassName>LumenRT.LumenRTApp</FullClassName>
    <VendorId>BSI</VendorId>
    <VendorDescription>Bentley Systems, Incorporated</VendorDescription>
  </AddIn>
</RevitAddIns>
I named my file LumenRTExport.addin and placed it in the following folder:
I should add one more thing...
The AddInId (GUID) is specific to the DLL. It is used to register each unique DLL in the Windows Registry. So if someone wanted to install the plugin for multiple versions (years) of Revit, then each Bentley DLL for each Revit version would require a unique AddInId (GUID). Generating a unique GUID is quite easy using Windows PowerShell. The syntax is as follows:
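The exact command the original poster used isn't shown; for reference, either of these standard PowerShell expressions prints a freshly generated GUID:

```powershell
# .NET GUID constructor, works in any PowerShell version:
[guid]::NewGuid()

# PowerShell 5.0 and later also ship a dedicated cmdlet:
New-Guid
```

Paste the resulting value into the AddInId element of each version-specific .addin file.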
This is a massively less ambitious idea, and doesn't solve the original problem, but:
I'd like to see a scope for names that are "obviously ;-)" meant to be short lived. At the moment that's the for loop "counters" and those created by context managers:
for i in something:
    # use i as usual
more_code  # now i is not defined
This would be a lot like comprehensions, and for the same reasons (though less compelling)
with something as a_name:
    # use a_name here
more_code  # a_name is not defined.
This just plain makes "natural" sense to me -- a context manager creates, well, a context. Various cleanup is done at the end of the block. The variable was created specifically for that context. It's really pretty odd that it hangs around after the context manager is done -- deleting that name would seem a natural part of the cleanup a context manager does. This one seems more compelling than the for loop example.
Consider probably the most commonly used context manager:
with open(a_path, 'w') as output_file:
    output_file.write(some_stuff)
# whatever else
now we have a closed, useless, file object hanging around -- all keeping that name around does is keep that file object from being freed.
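A quick interactive check shows both names surviving their blocks under today's scoping rules:

```python
import os
import tempfile

# The loop variable leaks into the enclosing scope after the loop ends.
for i in range(3):
    pass
print(i)  # -> 2: the last loop value is still bound

# The context-manager target also survives, bound to a now-closed file.
path = os.path.join(tempfile.gettempdir(), "demo.txt")
with open(path, "w") as output_file:
    output_file.write("some stuff")
print(output_file.closed)  # -> True: the name outlives the context
os.remove(path)
```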
The challenge, of course, is that this would be a backward incompatible change. I don't think it would break much code (though a lot more for "for" than for "with"), but maybe it could be enabled with a __future__ import.
In any case, I would much rather have to opt-in to keep that variable around than opt out with an extra let or something.
On Sun, Nov 29, 2020 at 3:59 PM Greg Ewing firstname.lastname@example.org wrote:
On 29/11/20 11:02 pm, Paul Sokolovsky wrote:
It will be much more obvious if there's a general (standalone) "const",
I don't think it will. There's nothing about the problem that points towards constness as a solution, so it doesn't matter how many other places in the language "const" appears.
And even if you're told about it, you need two or three steps of reasoning to understand *why* it solves the problem.
that's why I'm saying we can't really consider "for const" without just "const"
I agree with that.
And it's "pretty obvious" to someone who considered various choices and saw pieces falling into their places. Also might be pretty obvious for someone who used other languages.
I strongly suspect it's something that's obvious only in hindsight.
-- Greg
_______________________________________________
Python-ideas mailing list -- email@example.com
To unsubscribe send an email to firstname.lastname@example.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at https://email@example.com/message/KKMR7Z...
Code of Conduct: http://python.org/psf/codeofconduct/
Some time ago, I applied for a job where I had to implement a tool to schedule power on and power off on a computer.
The target was to have a file where to define a weekly schedule. There also should be a software to parse that file and act in consequence. It was also needed to add exceptions on certain days, specified by month and month day, where the schedule has to be different from the matching day of week. So the file should look something like this:
[week]
Monday = 8.30-20.00
Tuesday = 8.30-13.15
Wednesday = 8.30-
Thursday = -20.00
Friday = 8.30-19.00
Saturday =
Sunday = 8.30-20.00

[exceptions]
Nov_25 = 8.30-22.00
Sadly, the company that was hiring me tricked me on the price, so we never reached an agreement (or at least that's what I deduced from getting no answer to my mails). However, as they needed it urgently, I had been implementing the software during the negotiations, so my work was finished by the time they stopped answering me. Since I think it's an interesting piece of software, and many people don't know how to schedule a power on, I decided to post it on my blog.
Let's get to the point. This tool was coded in python and it has 3 pieces:
powerscheduler.py implements the main program that uses the rest of classes.
schedulereader.py implements the configuration file parsing.
necromancer.py implements the set up of power on and power off schedules.
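As a sketch of how the schedule file above can be parsed, Python's standard configparser handles the [week]/[exceptions] layout naturally. The function and return shape here are my own illustration, not necessarily what schedulereader.py actually does:

```python
import configparser

def parse_schedule(path):
    """Parse the schedule file into {section: {day: (on, off)}}.

    An empty side of the "on-off" range (e.g. "8.30-" or a blank value)
    becomes None, meaning no power-on or power-off for that day.
    """
    cp = configparser.ConfigParser()
    cp.read(path)
    schedule = {}
    for section in ('week', 'exceptions'):
        schedule[section] = {}
        for day, value in cp.items(section):
            on, _, off = value.partition('-')
            schedule[section][day] = (on or None, off or None)
    return schedule
```

Note that configparser lowercases keys by default, so "Monday" comes back as "monday".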
As you can see, all the interesting magic will be in necromancer.py file, so let's see how we can schedule the killing of our computer and then get it back from the Avernus.
First we'll see the easy part: how to schedule the shutdown. The easiest way of doing this is to create a cron task, so when we schedule a power off, necromancer.py will create a file at /etc/cron.d/necromancer_poweroff which will call shutdown and then erase itself.
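The generated cron file isn't shown in the post; a hypothetical example of what /etc/cron.d/necromancer_poweroff could contain (the day, time, and shutdown flags here are illustrative, not taken from the actual tool):

```
# /etc/cron.d/necromancer_poweroff -- hypothetical content
# min hour dom mon dow  user  command
0 20 * * 1  root  rm -f /etc/cron.d/necromancer_poweroff && /sbin/shutdown -h now
```

Removing the file before invoking shutdown keeps a stale entry from surviving if the machine halts mid-command.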
How do we schedule the power on? Using the RTC timer, of course. To check whether your computer supports RTC, just check whether the file /proc/driver/rtc exists. The RTC clock is the clock we configure in our BIOS, the hardware clock, so the first thing we have to check is whether the RTC clock and the system clock hold the same date and time (our computer could be using NTP, so the hardware clock could be completely different from the system time). It is also important to take care of the time zone, since hwclock is usually set to UTC and the system time is calculated from it using its time zone.
To solve this problem, the first thing that necromancer.py does is set the hardware clock to UTC based on the system time. This way we can ensure that, as long as we work in UTC, the time will always be the same in the system and the RTC.
Now we can schedule a power on date. This can be done by writing the date into the file /sys/class/rtc/rtc0/wakealarm. That date must be in UTC time. We can test with these commands:
# echo 0 > /sys/class/rtc/rtc0/wakealarm
# date -u -d "1:25:00 Dec 22, 2012" +%s > /sys/class/rtc/rtc0/wakealarm
# cat /proc/driver/rtc
rtc_time        : 12:56:37
rtc_date        : 2011-12-09
alrm_time       : 00:25:00
alrm_date       : 2012-12-22
alarm_IRQ       : yes
alrm_pending    : no
update IRQ enabled      : no
periodic IRQ enabled    : no
periodic IRQ frequency  : 1024
max user IRQ frequency  : 64
24hr            : yes
periodic_IRQ    : no
update_IRQ      : no
HPET_emulated   : yes
BCD             : yes
DST_enable      : no
periodic_freq   : 1024
batt_status     : okay
The first line resets the RTC wake alarm. We set the wake alarm in the second line (note the -u parameter, so the date is interpreted in UTC). In the third line, we see the lines alrm_time and alrm_date: as you can see, the alarm is set for the date we specified. I'm in UTC+1, so the time shown is one hour earlier, to be in UTC.
All this is done in necromancer.py. But how does the software work altogether? It's simple: once you have changed the schedule in the configuration file (stored in /etc/power_scheduler.cfg by default), the powerscheduler.py tool has to be called. This can be done by calling the script from cron every five minutes or so, but I recommend using incron as the best solution. I'll talk about incron in another post some day.
Well, that's all for now. All the files used here are stored at https://github.com/diego-XA/dgtool/tree/master/powerscheduler. Feel free to use the tool and modify it.
[Greylist-users] greylisting and VERP
raeburn at raeburn.org
Sun Oct 5 23:13:38 PDT 2003
Hi. I've only just installed relaydelay on my mail server, though
I've been following the list (via the archive) for a little while.
Aside from annoying little things like supposedly legitimate mailers
that never retry, and the delays on VERP with per-message envelope
senders, it seems like a great scheme. (At least until the spammers
all start resending after several hours' delay.)
I'm on more than one list that uses a per-message envelope sender for
tracking bounces. And I can't quite agree with the comments in the
greylisting web page that suggest it's a broken idea. The
recommendation in the greylist docs seems to be just to live with the
delay for every message.
For all the cases I've seen, there's a numeric field present, in one
of a small number of fairly simple forms:
liststuff-###-###-###-encodingofmyaddress at host (yahoo groups),
liststuff-###-addr at host, and occasionally liststuff+M###@host.
Is there some reason not to stick regular expressions for these forms
someplace and boil them down to a common form? A "from whom do I have
mail" script I wrote some time back does this substitution on names
before doing a unique sort, and it works fairly well:
| sed -e 's/-[0-9][0-9\-]*-raeburn/-#-raeburn/g' \
-e 's/-[0-9][0-9\-]*-kr/-#-kr/g' \
-e 's/+M[0-9][0-9]*@/+M#@/g' \
-e 's/+M[0-9][0-9]*=/+M#=/g' \
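Those substitutions translate directly into code. Here's a small Python sketch of the same idea (the function name and exact patterns are my own generalization, not part of relaydelay):

```python
import re

# Collapse per-message numeric tokens in VERP-style envelope senders so
# that successive messages from the same list map to one greylist identity.
PATTERNS = [
    (re.compile(r'-\d[\d-]*-'), '-#-'),   # liststuff-123-456-addr@host
    (re.compile(r'\+M\d+@'), '+M#@'),     # liststuff+M123@host
    (re.compile(r'\+M\d+='), '+M#='),     # liststuff+M123=user@host
]

def normalize_sender(sender):
    for pattern, replacement in PATTERNS:
        sender = pattern.sub(replacement, sender)
    return sender

print(normalize_sender('list-12-34-raeburn@host'))  # -> list-#-raeburn@host
print(normalize_sender('list+M123@host'))           # -> list+M#@host
```

The greylist tuple would then be keyed on the normalized sender, so retries with a fresh per-message token still match the first attempt.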
Now, maybe in the Yahoo Groups case, it would make sense to keep the
group number, except of course that Yahoo Groups is lame enough that
it needs to be whitelisted. Still, perhaps replacing a block of
digits surrounded by dashes, or preceded by "+M" and followed by "@"
or "=", would let the list messages come through without delay, and
without opening up the recipient to too much spam?
I guess a spammer could try forging "spammer-1-foo at aol" on one pass
and "spammer-2-foo at aol" on another pass, to avoid having the same
identity (which could have gotten marked as a spammer) show up too
often, and that would get him past the greylist filter with this
change. Is that likely to be a big problem? Maybe it could be a
per-host or per-envelope-sender-domain substitution, installed (in
relaydelay.pl, relaydelay.conf, or the database) manually (simple but
tedious), or automatically by a maintenance script detecting a pattern
in successfully delivered messages (automatic but hard)?
The general idea seems kind of obvious to me, which makes me figure
it's probably been considered before. Am I missing something? Would
this not work, or open up the user to too much spam?
More information about the Greylist-users
more advanced Lab tests for computers
Faced with the rapid development of technologies, the creation of new product varieties, and the interest of enthusiasts, Labo Fnac is changing its protocol for evaluating computer performance. Even more demanding and closer to real-world use, the new protocols allow all products to be tested, from cloud computing devices (Chromebooks and Windows S machines) to tower PCs dedicated to gamers, including all-in-ones. One protocol to rule them all!
Indeed, like the new one, the old protocol was built on games. It relied on Call of Duty: Modern Warfare, a demanding title, but one that has not changed since 2007. While it represented the state of the art at the time, it is capped at 91 fps (frames per second) and is no longer suitable for testing the latest generations of graphics cards. The same goes for other popular titles such as DiRT Rally and Far Cry, even if the measurements taken with them are not included in the calculation of the score.
On the office side, Adobe solutions were used, as well as the essential Word and Excel programs. The tests consisted of creating large files with various filters and macros: documents containing hundreds of chapters, or large spreadsheets.
However, this software has evolved into so-called "cloud" solutions, whose performance now depends more on the quality of the Internet connection and the available servers, making the measurements inconsistent with the associated rating system.
So what to do? Labo Fnac chose to start over, taking the same care to be fair across all the products tested.
Advances in technology have prompted the laboratory to develop its testing protocols as well. But even if that means changing everything, it can be done differently. Instead of using classic benchmark programs like most specialized sites, Javare Traore (computer test manager) preferred well-known open source programs such as ImageJ, developed by the US National Institutes of Health and widely used by the scientific research community for image processing. Of course, the protocols are designed to reflect how users actually use their computers, and ImageJ will be used to evaluate performance in office automation.
By the way, a built-in app called PerfLab just needs to be installed to perform some measurements right out of the box. It should be noted that the battery-life test has also evolved somewhat, and there are still measurements of the audio section and the screens of laptops.
Pushing computers to their limits
The new test protocols have a clear goal: the machine must be used to its maximum capacity while targeting each component individually, for example all the cores of the CPU (main processor), something that cannot be done with Word.
For this first test, Labo Fnac uses the popular Blender software. A reference in the world of 3D modeling, which supports new graphics card features such as ray tracing at up to 360 frames per second in HD, it will also be the basis for the gaming tests.
The protocol consists of 3D modeling the entire Earth and the Moon, with the Milky Way as a background. And as if that weren't enough, it uses a variety of textures up to 38K (21,600 x 10,800 pixels) developed by NASA.
As we wrote above, this test allows you to specifically target the CPU by performing three different renderings at 2K, 4K and 8K. In the screenshot below, we can see that all cores are in use while rendering is running. The video card is also used, and the operation may take several minutes depending on the performance of the computer. That's also one of the advantages of this particular test: it does not settle for a simple peak measurement.
After the test is completed, a file is created that can no longer be changed, and it is in this file that the results are written. Also note that the operation is repeated at 2, 4 and 8K not only for the CPU but also for the GPU (graphics processor).
But that's not all. The tester also uses the BMW 27 scene to validate these homemade 3D renderings. This is a rendering benchmark file widely used on the Internet, which allows rendering performance to be compared against a global database.
Finally, there is also an animation test with Eevee, Blender's real-time engine based on the cross-platform OpenGL interface. This one targets GPU 0, the main graphics chip, which is generally the least powerful and consumes the least. During this step, an animation of 11 frames is rendered, and the rendering time is noted.
All the results of these tests are imported into the laboratory database. Incidentally, the lab also records write times on internal storage, be it a good old hard drive or an SSD. Similarly, the log file collects other useful information, such as RAM performance.
The old protocol could not test an Nvidia graphics card the same way as an AMD model or an M1 processor. The new Fnac Lab protocol uses the rendering engine best suited to the graphics component (DirectX, CUDA, OptiX, Metal) and adapts to all platforms: macOS, Windows, Linux, ChromeOS, Windows S. As proof of how demanding these tests are, even the extremely powerful RTX 3090 was brought to its knees during the assessment.
Tests adapted to real use
In addition to raw performance, attention is also paid to users' actual workloads. In the next step, the protocol simulates the needs of a photographer or graphic designer handling large files.
This batch-processing test feeds photos to the ImageJ software. The principle is quite simple: a target folder groups the input photos, and an empty output folder receives the files once the macro has been applied. The macro adds multiple tags to each image, resizes them, and saves them under a new name.
Web apps are now essential on our computers, smartphones, tablets and even TVs. To evaluate their performance, the lab uses the main video card (GPU 0, mentioned above, not the second, more powerful GPU) and the Pixel Fill Rate test, which measures the ability to generate a number of pixels per second.
This test allows you to get maximum FPS and determine intelligent performance management without automatically switching to the most powerful GPU 1 when the system is under heavy load. The advantage of this script is that it can adapt to the performance of the product it is running on (PC, mobile device, etc.) and can go up to GPU saturation. The resulting curve shows the time in milliseconds on the ordinate, and the number of megapixels on the abscissa. Its shape indicates whether the component is stable or not.
After all, it is impossible to test a computer without taking an interest in gaming. The new test protocol is divided into two parts: on one hand, the GPU rendering seen above; on the other, real-time rendering. The latter is again done with Blender, together with FPS Monitor.
The program is configured to use wireframe rendering (Wire) first, then solid rendering, and finally rendering with the Eevee engine with ray tracing, playing the animation back on screen in real time. FPS Monitor records the number of frames per second (fps) during the three tests. These results, combined with the GPU rendering performance, produce an overall score for the gaming portion.
Extensive testing in all areas
As before, other tests evaluate the screens, the sound quality, and the battery life of laptops. The battery-life test has also improved a bit, with a solution for continuous video streaming from VLC.
Screens are tested with the professional equipment used for televisions. Recall that the following are evaluated: colorimetry (but only in sRGB, unlike TVs, which are tested in DCI-P3 and Rec. 2020), gamma and orientation. Another key difference from TV testing is that the contrast ratio measurement is not based on the ratio of black to white levels, but on 5% black, to compare LCD and OLED panels realistically. Finally, the pixel density is reported, something we don't do with TVs.
Finally, the audio tests have also improved a bit. Indeed, the jack plug test was removed because it was time-consuming for little benefit (with the arrival of Bluetooth headphones and the removal of jack ports on new computer chassis). In addition, the loudspeakers are tested in an anechoic chamber, with the computer placed on a test table at a height of 40 cm. The microphone is positioned where the user's head would be, to simulate sitting in a chair in front of a laptop on a table. The screen angle is always the same so that results can be compared.
During this step, the frequency response is measured over a bandwidth ranging from 63 Hz to 16 kHz (laptops are limited in the low frequencies). Finally, distortion is measured at 1 kHz, the most common test frequency, at the 3% distortion threshold, always with the same microphone configuration.
All these tests yield the different radar-chart ratings used to compare computers and guide your choice.
# Routines to perform Chi Squared tests.
# Used for fingerprinting unknown areas of high entropy (e.g., is this block of high entropy data compressed or encrypted?).
# Inspired by people who actually know what they're doing: http://www.fourmilab.ch/random/

import math

from binwalk.core.compat import *
from binwalk.core.module import Module, Kwarg, Option, Dependency


class ChiSquare(object):
    '''
    Performs a Chi Squared test against the provided data.
    '''

    IDEAL = 256.0

    def __init__(self):
        '''
        Class constructor.

        Returns None.
        '''
        self.bytes = {}
        self.freedom = self.IDEAL - 1

        # Initialize the self.bytes dictionary with keys for all possible byte values (0 - 255)
        for i in range(0, int(self.IDEAL)):
            self.bytes[chr(i)] = 0

        self.reset()

    def reset(self):
        self.xc2 = 0.0
        self.byte_count = 0

        for key in self.bytes.keys():
            self.bytes[key] = 0

    def update(self, data):
        '''
        Updates the current byte counts with new data.

        @data - String of bytes to update.

        Returns None.
        '''
        # Count the number of occurrences of each byte value
        for i in data:
            self.bytes[i] += 1

        self.byte_count += len(data)

    def chisq(self):
        '''
        Calculate the Chi Square critical value.

        Returns the critical value.
        '''
        expected = self.byte_count / self.IDEAL

        if expected:
            for byte in self.bytes.values():
                self.xc2 += ((byte - expected) ** 2) / expected

        return self.xc2


class EntropyBlock(object):

    def __init__(self, **kwargs):
        self.start = None
        self.end = None
        self.length = None
        for (k, v) in iterator(kwargs):
            setattr(self, k, v)


class HeuristicCompressionAnalyzer(Module):
    '''
    Performs analysis and attempts to interpret the results.
    '''

    BLOCK_SIZE = 32
    CHI_CUTOFF = 512
    ENTROPY_TRIGGER = .90
    MIN_BLOCK_SIZE = 4096
    BLOCK_OFFSET = 1024
    ENTROPY_BLOCK_SIZE = 1024

    TITLE = "Heuristic Compression"

    DEPENDS = [
        Dependency(name='Entropy',
                   attribute='entropy',
                   kwargs={'enabled': True, 'do_plot': False, 'display_results': False, 'block_size': ENTROPY_BLOCK_SIZE}),
    ]

    CLI = [
        Option(short='H',
               long='heuristic',
               kwargs={'enabled': True},
               description='Heuristically classify high entropy data'),
        Option(short='a',
               long='trigger',
               kwargs={'trigger_level': 0},
               type=float,
               description='Set the entropy trigger level (0.0 - 1.0, default: %.2f)' % ENTROPY_TRIGGER),
    ]

    KWARGS = [
        Kwarg(name='enabled', default=False),
        Kwarg(name='trigger_level', default=ENTROPY_TRIGGER),
    ]

    def init(self):
        self.blocks = {}

        self.HEADER[-1] = "HEURISTIC ENTROPY ANALYSIS"

        # Trigger level sanity check
        if self.trigger_level > 1.0:
            self.trigger_level = 1.0
        elif self.trigger_level < 0.0:
            self.trigger_level = 0.0

        if self.config.block:
            self.block_size = self.config.block
        else:
            self.block_size = self.BLOCK_SIZE

        for result in self.entropy.results:
            if not has_key(self.blocks, result.file.name):
                self.blocks[result.file.name] = []

            if result.entropy >= self.trigger_level and (not self.blocks[result.file.name] or self.blocks[result.file.name][-1].end is not None):
                self.blocks[result.file.name].append(EntropyBlock(start=result.offset + self.BLOCK_OFFSET))
            elif result.entropy < self.trigger_level and self.blocks[result.file.name] and self.blocks[result.file.name][-1].end is None:
                self.blocks[result.file.name][-1].end = result.offset - self.BLOCK_OFFSET

    def run(self):
        for fp in iter(self.next_file, None):
            if has_key(self.blocks, fp.name):
                self.header()

                for block in self.blocks[fp.name]:
                    if block.end is None:
                        block.length = fp.offset + fp.length - block.start
                    else:
                        block.length = block.end - block.start

                    if block.length >= self.MIN_BLOCK_SIZE:
                        self.analyze(fp, block)

                self.footer()

    def analyze(self, fp, block):
        '''
        Perform analysis and interpretation.
        '''
        i = 0
        num_error = 0
        analyzer_results = []
        chi = ChiSquare()

        fp.seek(block.start)

        while i < block.length:
            j = 0
            (d, dlen) = fp.read_block()
            if not d:
                break

            while j < dlen:
                chi.reset()

                data = d[j:j + self.block_size]
                if len(data) < self.block_size:
                    break

                chi.update(data)

                if chi.chisq() >= self.CHI_CUTOFF:
                    num_error += 1

                j += self.block_size

                if (j + i) > block.length:
                    break

            i += dlen

        if num_error > 0:
            verdict = 'Moderate entropy data, best guess: compressed'
        else:
            verdict = 'High entropy data, best guess: encrypted'

        desc = '%s, size: %d, %d low entropy blocks' % (verdict, block.length, num_error)

        self.result(offset=block.start, description=desc, file=fp)
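For readers without binwalk installed, the core statistic is easy to reproduce standalone. This sketch recomputes the same chi-squared value over a 256-bin byte histogram and checks it against the module's CHI_CUTOFF of 512:

```python
def chisq(data, ideal=256.0):
    # Same statistic as ChiSquare.chisq(): sum of (observed - expected)^2 / expected
    # over all 256 possible byte values.
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    expected = len(data) / ideal
    return sum((c - expected) ** 2 / expected for c in counts)

uniform = bytes(range(256)) * 16   # perfectly flat histogram
skewed = b'A' * 4096               # all mass on a single byte value

print(chisq(uniform))        # -> 0.0: indistinguishable from the ideal distribution
print(chisq(skewed) > 512)   # -> True: far above CHI_CUTOFF, clearly non-random
```

The heuristic above works because compressed streams deviate from a flat histogram noticeably more often than well-encrypted data, so counting blocks that exceed the cutoff separates the two guesses.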
This is how I used to build QMC2 with the old MAME tools, described in a little guide which I made: http://forums.bannister.org/ubbthreads.php?ubb=showflat&Number=94168

How to compile QMC2 in Windows (from the beginning... step-by-step... the easiest way)

You'll need: a computer, internet, and the desire to build QMC2 for yourself!

1- Go to: http://mamedev.org/tools/
*if you're using an x86 Windows platform, download the first binary.
*and if your Windows is an x64 platform, download the second.
NOTE: Well, now we have different tools there.

2- Extract the content of the folder and put it in C:/ (or whatever the name of your root directory is). Now execute setup-Python.bat and setup-Qt.bat from this directory. You have the compiling environment ready for compiling QMC2! Now let's get the code.
NOTE: These .bat files don't exist anymore with the new MAME tools.

3- Download the most recent version of TortoiseSVN and install it on your computer: http://tortoisesvn.net/downloads.html

4- Create a folder anywhere and right-click with the mouse. Choose "SVN Checkout" and click OK. TortoiseSVN will get the most recent code from SVN for you. After that, the next time you compile you can skip steps 1-4 and just choose "SVN Update" to get the most recent version of the code. With recent code in the directory, let's compile QMC2.
NOTE: OK, Tortoise is still usable for getting the source code.

5- Write the following text in Notepad and save it as <anything you want>.bat (to build a Windows x64 version of QMC2):

echo QMC2 Environment Ready!
echo To Build QMC2-MAME type:
make clean MINGW=1
make EMULATOR=mame WIP=1 MINGW=1
PAUSE

NOTE: This part is the most problematic, as these directories don't exist anymore in the new MAME build tools.

6- Execute the .bat file. If no errors happen, it will generate QMC2 for your x64 Windows.

7- Create a new folder. Name it, for example, "QMC2 application". You'll need to put the following files in this place:
1. From the QMC2 compile folder, take the "data" folder and copy it into the "QMC2 application" folder.
2. From your mingw64-w64\Qt\bin\ folder, copy only the .dlls to the "QMC2 application" folder.
3. From your \mingw64-w64\Qt\plugins\ folder, copy "sqldrivers" to "QMC2 application".
4. From the same folder, copy "phonon_backend" to "QMC2 application".
5. Copy \mingw64-w64\x86_64-w64-mingw32\bin\SDL.dll to "QMC2 application".
*for Windows x86, "mingw64-w64" will be "mingw64-w32".
You can keep this folder for future compilations.

8- In the folder with the code, find the "release" directory and grab the QMC2 variant that you compiled. Put it in the folder together with the files in "QMC2 application".

9- Execute QMC2 from there and PROFIT!
how to display user name and activity in cowaxess output
my ssl_access_log notes date, time, user and activity (below, downloading a pdf)
(IP ADDRESS) - <EMAIL_ADDRESS> [11/May/2021:08:30:22 -0400] "GET /pdf/my.pdf HTTP/1.1" 401 -
how can I tweak my command so my cowaxess report.html file displays this level of information?
Please try this, it should work:
goaccess access.log --log-format='%h %^ %e [%d:%t %^] "%r" %s %b' --date-format=%d/%b/%Y --time-format=%T
or HTML output
goaccess access.log --log-format='%h %^ %e [%d:%t %^] "%r" %s %b' --date-format=%d/%b/%Y --time-format=%T -o report.html
Closing this. Feel free to reopen it if needed.
Thanks for your help on this.
I think the issue is that my log file lists the user’s IP address first and user name second:
xxx.xxx.xxx.xxx - username [09/Jun/2021:10:51:06 -0400] "GET /favicon.ico HTTP/1.1" 404 306
When I run the report per your instructions, it lists user by IP, not user name
Are you able to see what's the last panel on the report? Here's what I see from my end:
I do get this:
[inline screenshot]
Frankly, there’s TOO much and not enough information in this report.
I’d prefer something simple that I can show my managers that notes that x user downloaded y pdf file on z date ... perhaps reversible to show that y pdf was downloaded x number of times in this period by this many users.
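The "user x downloaded pdf y" summary requested above can also be produced outside the report tool with a small script. This is an illustrative sketch, not part of goaccess or cowaxess; the regex assumes the common-log-style line shown earlier, and names like `pdf_downloads` are made up:

```python
import re
from collections import Counter

# Matches lines like:
#   1.2.3.4 - alice [09/Jun/2021:10:51:06 -0400] "GET /pdf/my.pdf HTTP/1.1" 200 306
LOG_RE = re.compile(
    r'(?P<host>\S+) - (?P<user>\S+) \[(?P<date>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d+)'
)

def pdf_downloads(lines):
    """Count (user, pdf path) pairs; all statuses are counted here."""
    counts = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group('path').endswith('.pdf'):
            counts[(m.group('user'), m.group('path'))] += 1
    return counts
```

Keying the counter on `(path, user)` instead, or on the path alone, gives the reversed "pdf y was downloaded N times" view.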
It's possible cowaxess is doing something else to display that. I'm not the one maintaining that repo, so I can't tell. However, you could always try running it via Cygwin or generating the reports through a Linux box. Sorry about that.
Soon goaccess will support probably one of its nicest features, "search/filtering", which will very easily address the report you are after. Stay tuned :)
My Experience with Mac OSX Lion
So I was eagerly anticipating the release of Mac OSX Lion. It’s not cause I’m a huge Mac fan or anything – far from it, but I like new toys and Windows 7 is getting old. I have a Mac Mini server at the house that I use for day to day ‘house’ use and as a toy. It provides decent amount of entertainment. So when Lion came out, I was prepared:
- Had run Disk Utility to check for errors – there were errors so I had to get into safe mode to do a disk check thing. Pretty hard to do when all the instructions are ‘boot from the DVD and run it in the GUI’ – cept I don’t have a DVD drive on the Mac Mini and couldn’t for the life of me get Remote Disk to work from my Windows machine. That’s just dumb. But whatever, got that fixed.
- Updated to 10.6.8. This is normally a decently seamless process for most folks, however, my Mac mini, for reasons passing understanding, never would update automatically – would always get hung on ‘moving files into place.’ So a manual install of that from Apple support did the trick.
Next up was to get Lion. Now, folks have been preparing for this for a while now, so I will admit my surprise at the sheer number of things that weren’t ready. Lion reminds me of Windows Vista! Check out the decently long list of things that are now busted:
- I use DisplayLink USB port to add a second monitor. Fail. New driver – coming soon.
- I use AirDisplay from Avatron to add screen sharing to my iPad. Fail. New driver – coming soon.
- I have an Iomega d200 NAS that keeps all of our everything on the network. Their AFP driver is now blocked by Apple for TimeMachine and basic network access. Fail. New driver(s) – coming soon.
- The built-in web server (wiki, blogs, etc.) that comes with Apple Server, which was working fine under Snow Leopard. Fail – everything is 503’ing. I can get it to render but only if I turn off Wiki. Ability to ‘reset or reinstall these components’? Missing entirely.
- The new much hyped ‘Server’ app – massive Fail. First, they moved some functions of server out of Server Manager to Server, but not all of them. So I had to go download the server admin tools just to manage things like DNS/DHCP/XSan (for Podcasting), etc. The other things like Web/Email/etc that did move to the Server app – no ability to natively see the logs to try to find out what’s wrong. Massive. MASSIVE FAIL. Not being a *nix nerd, I don’t know what to do to try and get these things back up and going. The Google’s seem to indicate it’s a permissions problem. Nice.
Otherwise, it sure does look perty and seems to have some interesting new features but holy crap. I thought Mac stuff was supposed to ‘just work.’ Seems a lot like the pot calling the kettle black looking back at those hilarious Apple commercials. Anyways, I’m sure things will get cleaned up, particularly on the 3rd party side, but Lion’s been in dev for a while – you’d think vendors would have learned from Microsoft and Vista.
Controlling DC Motors using Arduino and IR Remote

Related articles:
- Simple Remote Display with Mobile App - Arduino Project Hub
- Nextion Display with Arduino – Getting Started
- Arduino – Control LEDs with IR Remote Control
- Arduino Time Attendance System with RFID
- Arduino Home Automation Project Using IR Remote Control
- GitHub - z3t0/Arduino-IRremote: Infrared remote library
- Arduino Bluetooth Remote Lcd Display: 7 Steps
- Using LCD Displays with Arduino - YouTube
- Arduino - DroneBot Workshop

Notes from the article summaries:
- An IR remote transmits a modulated signal, commonly at a carrier frequency around 38 kHz; the modulation gives the remote a usefully long transmission range, and the exact frequency may depend on the remote model. This modulated signal is the heart of an Arduino home automation project using an IR remote control.
- You can decode any IR remote control using an Arduino: capture the signals from each button and display the decoded button/key name on the serial monitor.
- The Arduino IRremote library is available on GitHub, but it is easier to install directly from the Library Manager in the Arduino IDE (open the Library Manager and search for "IR Remote").
- A 4-digit 7-segment display can be interfaced to an Arduino Uno through a 74HC595N shift register IC, which converts serial data to parallel data.
- An Arduino can gather data from its sensors (or whatever you connect to it) and display it, but the real power of the data comes when you can monitor it over a local network.
I am trying to install Perl 5.005_03 on a VAX
running VMS 6.2
I am using DECC compiler V5.6, MMS version 3.2,
Multinet V4.1B and Decnet phase V, V6.3 ECO 10.
For what it is worth I did not have any trouble with
a machine equipped as similar to yours as I can scrounge
up: DEC C V5.3-006 on OpenVMS VAX V6.2, MMS V3.1-03,
Multinet V4.0 Rev C, Decnet phase IV.
That is "mms test" returned for me:
All tests successful.
u=18.52 s=0 cu=0 cs=0 files=165 tests=6163
In the Test phase of the build, I get 1 error.
It is on the module [.T.OP]READ.T
Below is documentation of the error, etc.
In the compile/link phase, there is one
informational message. I will include it, in
case it is relevant.
I did the "MMS" twice with "MMS realclean"
I got the source from the internet (latest_tar.tar)
We un-g-zipped it and untar'ed it on a PC.
We then FTP'd the modules from the PC to the VAX.
Somehow the FTP did not copy VMSISH.H, so I went back
and FTP'ed that one by itself. I have re-FTP'ed
the READ.T but it still fails the test.
This was the source of some of the trouble you had with missing files
when you ran configure.com. Try using gunzip.exe and vmstar.exe on your
VAX. These URLs are from README.vms:
2) GUNZIP/GZIP.EXE for VMS available from a number of web/ftp sites.
3) VMS TAR also available from a number of web/ftp sites.
MCR $29$dua53:[perl_5_005_03]miniperl.exe "-I[---.lib]" "-I[---.lib]"
xtUtils]xsubpp -typemap [---.lib.ExtUtils]typemap STDIO.xs >STDIO.C
if ((retsts = sys$setddir(&dirdsc,0,0)) & 1) ST(0)
%CC-I-IMPLICITFUNC, In this statement, the identifier "sys$setddir" is
implicitly declared as a function.
At line number 208 in stdio.xs.
%VCG-I-SUMMARY, Completed with 0 error(s), 0 warning(s), and
1 informational messages.
At line number 513 in stdio.c.
$ mms test
[.op]read...............FAILED on test 1
Failed 1 test, 98.19% okay.
u=39.79 s=0 cu=0 cs=0 files=164 tests=5994
For what it is worth the informational message from the compiler and
the failure of read.t are unrelated. (The informational message was
the DECC compiler building the VMS::Stdio module which is not
tested by read.t). I too saw the informational message with DECC 5.3-006.
Do you by chance have any logicals such as "T" or "OP" or "READ",
that is, does:
show logical t/table=*
turn up anything? That might have been part of the trouble.
The other thing that looks suspicious is that you defined
PERL_ROOT and did a SET DEF PERL_ROOT:. Although
the build procedure was originally created to avoid any trouble
if you did that, I am not sure how well tested it is.
That is if you did:
set def $29$DUA53:[PERL_5_005_03]
You might get that test to pass.
At any rate, if I were you I would definitely start with a fresh
kit, transferred using binary ftp directly to the VAX and
gunzip then tar -xvf it.
Migrated from rt.perl.org#949 (status was 'resolved')
Searchable as RT949$
In our previous article, we discussed what a proxy server is, how proxy servers work, and why you might need one. Now you may wonder how to install and configure a proxy server on your own. So, here is a tutorial for installing a proxy server and configuring it on various operating systems and platforms. Here, "proxy server installation" means software that runs on your VM or physical machine, configured so that users can use it. We will use one of the popular open-source packages, Pi-Hole.
Table of Contents
Supported Operating Systems and Platforms
Pi-Hole can be run instantly as a container with Docker. It can also be installed on the following operating systems.
Installation of Pi-Hole Proxy Server
As we said, Pi-Hole can be easily installed and run using a Docker container. Let's see how to run Pi-Hole in a Docker container first.
Install and Configure Pi-Hole on Docker Container.
One easy way to build the Docker image and run Pi-Hole is to use the Docker source code of Pi-Hole.
Clone it with the following commands:
$ git clone https://github.com/pi-hole/docker-pi-hole.git
$ cd docker-pi-hole
Then, look for the file called docker-compose.yml.example and take a copy; rename the copy to docker-compose.yml.
Then, edit the docker compose file if needed. The default Docker compose file looks like this:
version: "3"

# https://github.com/pi-hole/docker-pi-hole/blob/master/README.md
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    # For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "80:80/tcp"
      - "443:443/tcp"
    environment:
      TZ: 'America/Chicago'
      # WEBPASSWORD: 'set a secure password here or it will be random'
    # Volumes store your data between container upgrades
    volumes:
      - './etc-pihole/:/etc/pihole/'
      - './etc-dnsmasq.d/:/etc/dnsmasq.d/'
      # run `touch ./var-log/pihole.log` first unless you like errors
      # - './var-log/pihole.log:/var/log/pihole.log'
    # Recommended but not required (DHCP needs NET_ADMIN)
    # https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
Then simply run the following docker command to start the Pi-Hole Proxy Server:
$ docker-compose up --detach
That's all; your Pi-Hole Proxy Server is up and running. There is an even simpler way to do the same: run the shell script in the same directory to create the Pi-Hole Proxy Server docker container.
That’s all with the Pi-Hole Proxy Server docker container.
One Step Installation of Proxy Server
The one-step installation of the Pi-Hole Proxy Server on all the Linux distros mentioned above is to run the following curl command:
$ curl -sSL https://install.pi-hole.net | bash
There are two more alternative methods to install the proxy server.
Method 1: Clone the repository and run the basic-install.sh script
$ git clone --depth 1 https://github.com/pi-hole/pi-hole.git Pi-hole
$ cd "Pi-hole/automated install/"
$ sudo bash basic-install.sh
Method 2: Manually download the installer and run the same shell script
$ wget -O basic-install.sh https://install.pi-hole.net
$ sudo bash basic-install.sh
Configuring and Managing Pi-Hole Proxy Server
Pi-Hole comes with an excellent web interface that lets you monitor, configure, and manage the Pi-Hole Proxy Server. This web interface helps you configure and manage the following features.
Once you have installed Pi-Hole on your machine, the final prompt on the command line will give you the first-time user password. You just need to copy the password and log in to the dashboard.
root@dockervagrant:/home/vagrant# git clone https://github.com/pi-hole/docker-pi-hole.git
Cloning into 'docker-pi-hole'...
remote: Enumerating objects: 49, done.
remote: Counting objects: 100% (49/49), done.
remote: Compressing objects: 100% (44/44), done.
remote: Total 3729 (delta 18), reused 20 (delta 4), pack-reused 3680
Receiving objects: 100% (3729/3729), 1020.18 KiB | 418.00 KiB/s, done.
Resolving deltas: 100% (2229/2229), done.
Checking connectivity... done.
root@dockervagrant:/home/vagrant# ls
docker-pi-hole
root@dockervagrant:/home/vagrant# cd docker-pi-hole/
root@dockervagrant:/home/vagrant/docker-pi-hole# ./docker_run.sh
WARNING: Localhost DNS setting (--dns=127.0.0.1) may fail in containers.
Unable to find image 'pihole/pihole:latest' locally
latest: Pulling from pihole/pihole
45b42c59be33: Pull complete
2ca7a5291c9d: Pull complete
b41c6299b20e: Pull complete
80860ce958e9: Pull complete
4dbc97c8f3ee: Pull complete
895160d4b8d8: Pull complete
a2a31a2941ba: Pull complete
12c651e5b1a3: Pull complete
47d66783daaa: Pull complete
bc2a1c51f34f: Pull complete
d5765c5d17cd: Pull complete
fea6f10554ce: Pull complete
651d68f3083f: Pull complete
Digest: sha256:3a39992f3e0879a4705d87d0b059513af0749e6ea2579744653fe54ceae360a0
Status: Downloaded newer image for pihole/pihole:latest
f7f178c62798b441ade7e2a31b7d48641bba792dba9b7a7fe00a43aea643f84b
Starting up pihole container .......... OK
Assigning random password: eNDM35sd
Setting password: eNDM35sd for your pi-hole: https:///admin/
To get the dashboard, go to your browser and enter the IP address of the machine on which you installed the Pi-Hole Proxy Server. The entire UI looks like the screenshot below.
In this article, we have discussed how to install and configure a proxy server, along with its features and functionality. In our upcoming article, we will discuss how to set up your own VPN server and combine VPN and proxy to make your enterprise more secure. Stay tuned and subscribe to DigitalVarys for more articles and study materials on DevOps, Agile, DevSecOps, and App Development.
Experienced DevSecOps Practitioner, Tech Blogger, Expertise in Designing Solutions in Public and Private Cloud. Opensource Community Contributor.
NF: Adds metadata translation functionality in dedicated class
This PR:
Is in response to this comment: https://github.com/datalad/datalad-catalog/issues/224#issuecomment-1401637670
Builds on top of and is made against: https://github.com/datalad/datalad-catalog/pull/237
Adds the abstract base class TranslatorBase from which any extension providing a new metadata translator should inherit (this follows a very similar design to the metalad implementation of a base extractor class)
By overriding a number of base class definitions, translators should provide the name and version of the extractor as well as the version of the catalog schema that they are compatible with; translators can also provide their own logic for translation (which could depend, e.g., on jq or not)
Adds a Translate class which is instantiated with a metadata record in order to:
match the incoming metadata to an appropriate translator (by inspecting translators added as entry points and returning their match methods)
run metadata translation if an appropriate translator is found
Adds translate as a catalog subcommand (to be refactored later in bulk via https://github.com/datalad/datalad-catalog/issues/245)
Adds translator implementations for datacite_gin, bids_dataset, metalad_studyminimeta, and metalad_core based on the above classes as well as @mslw implementation here: https://github.com/mslw/datalad-wackyextra/blob/main/datalad_wackyextra/translators/datacite.py
Updates all schemas to comply with the refactored config / metadata_sources setup (see https://github.com/datalad/datalad-catalog/pull/237)
Updates workflows.py to use the added Translator functionality (removing old translator scripts)
Updates all existing tests to account for the changes in schemas, translators and workflows.
TODO:
[ ] add translator tests for core (dataset and file), studyminimeta, bids_dataset
[x] update documentation (to be done in bulk as part of https://github.com/datalad/datalad-catalog/pull/237 once this current PR is merged)
Old (but still perhaps useful to have documented here)
Sample testing code 1:
from pathlib import Path
from datalad_catalog import (
translate,
utils,
)
metadata_file = Path('datalad_catalog/tests/data/metadata_datacite_gin.json')
metadata_record = utils.read_json_file(metadata_file)
translate.Translate(metadata_record).run_translator()
Sample testing code 2:
from datalad_catalog.translate import get_translators
ts = get_translators()
ts_datacite = ts['datacite_gin_translator']
inst = ts_datacite['loader']()()
inst.match('datacite_gin', '0.0.1')
@mslw:
This is a massive change, in the best meaning of this word ;) Thanks for preparing these changes. I did not play with the PR, so it's a code review in a literal sense.
In the proposed form, TranslatorBase.match() takes source name and version - it would be good to have source ID as an optional argument, so that we could utilise this feature of MetaLad extractors.
Agree 👍
If a translator wanted to implement a more complex logic than 1:1 match of version & name (and I think it should be left to the specific translator to implement - extractors may use whatever versioning schema), it would need to override match(), and still provide get_supported_extractor_version() & get_supported_extractor_name() anyway (if I understand @abc.abstractmethod correctly, a derived class needs to override all abstract methods before it can be instantiated).
If the implementation of a TranslatorBase was up to me, I would do without get_supported_extractor_version() & get_supported_extractor_name(), and instead make match() an abstract class method:
@classmethod
@abstractmethod
def match(cls, source_name: str, source_version: str, source_id: str | None = None) -> bool:
This way, match logic would be left entirely to the specific implementation, finding a matching translator class would not require instantiation, and splitting translator implementation into two classes (one for abstract methods, one for translation) would not be necessary.
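The classmethod-based design proposed above can be sketched as a minimal runnable example; the class and method names below are illustrative, not datalad-catalog's actual API:

```python
from abc import ABC, abstractmethod
from typing import Optional

class TranslatorBase(ABC):
    """Sketch of the proposed base class: match() is an abstract classmethod."""

    @classmethod
    @abstractmethod
    def match(cls, source_name: str, source_version: str,
              source_id: Optional[str] = None) -> bool:
        """Return True if this translator handles the given metadata source."""

    @abstractmethod
    def translate(self, metadata_record: dict) -> dict:
        """Translate a metadata record to the catalog schema."""

class DataciteGinTranslator(TranslatorBase):
    @classmethod
    def match(cls, source_name, source_version, source_id=None):
        # Simple 1:1 name match; a translator may implement richer logic.
        return source_name == 'datacite_gin'

    def translate(self, metadata_record):
        # Placeholder: a real translator would map fields (e.g. via jq).
        return metadata_record

def find_translator(translators, name, version, source_id=None):
    # Matching needs no instantiation, since match() is a classmethod;
    # only the winning translator class is instantiated.
    for cls in translators:
        if cls.match(name, version, source_id):
            return cls()
    return None
```

This illustrates the trade-off under discussion: the match logic cannot touch instance state, which is exactly what makes class-level matching cheap.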
I agree with this reasoning. One thing that I'm hesitant about is the scenario where translators inheriting from the base class implement complex matching algorithms (in their overridden match method) that need to access instance methods. This would not be possible (e.g. I won't be able to access self.get_supported_extractor_version() in the new classmethod), and such logic would have to be implemented in a different class, or maybe in class-less functions in the same module. This is not necessarily a big issue and can be done, but perhaps there's a less hacky alternative?
Inherits from: UI. Implements interfaces: ILayoutElement.
How do I display a variable in a UI Text component?
How can I display a variable, say an integer variable, in Unity's UI text? The variable may be a public variable from another script. Can anyone please help me do it step by step?
Answer: Below is a script that will show the value myIntValue, and it only ever changes the value on screen when the value changes. Parts of this script will be integrated into whatever script is modifying your integer value. You'll then have a UI component somewhere else that has a Text component on it. In the Unity editor, you'd drag the Text component onto this script in the inspector. This will assign the Text component to the textComponent variable in the script, which will then get updated when your value changes. Further, you can see a simpler version of this in the Unity tutorials on UI Text components.
Comment: Did you check any tutorials online? Did you try googling it? You are asking a pretty simple question that I'm sure can be solved within 5 minutes of googling. It would be easier to tell us what you tried so far and why it failed, so that people can better understand what you are struggling with.
Hi, might be a stupid question, but I haven't been able to figure out how to change the text for the TextMeshPro Text in my UI via script. Same with mText. The script is attached to the GameObject and uses TMPro. It works if I create it as a 3D object, but not as part of the UI. Not quite sure what I'm missing; I looked over the script tutorial for the plugin and there it's just written as stated above.
Reply: For UI you need TextMeshProUGUI, which inherits from Text and is designed to work with the CanvasRenderer and Canvas system.
Ah, thanks, looked over the FAQ earlier, but missed that one.
Here's my issue (in a "How to make a Dialogue System in Unity" setup): What; ActionWhoField. Who; ActionWhenField. When; ActionRewardField. The first time it works, but when I add a new item in my list and load that one, I get the data from the previous one. Is there some special treatment when you try to put an empty string in the text field of a TextMeshPro?
Reply: To get the component type for something like this, just look at the text label of the component to the left of "Script" and remove the spaces.
How to display HTML formatted text in Unity's new UI Text?
How could I display HTML formatted text in Unity's new UI Text? The main issue I am facing is that I want to support HTML tags, and I would love to know if there is any way to achieve it with Unity UI (nGUI has HTML support), or whether there is any workaround for this.
Answer: As a workaround to get more advanced HTML tags working in Unity, you might want to try an HTML texture like the one provided here. The idea that comes to mind is that you write your HTML formatted text as a normal... Might work, don't know.
Comment: Yes, Unity Rich Text has very, very limited support as of now. As for HtmlTexturePlugin, I didn't find any example of using it for HTML formatted text; it's for rendering a whole HTTP site within Unity3D, but that's not what I needed. What I want is similar to this assetstore.
how to display a text on the screen
In previous versions, the following code works fine. I want to display HighScore. I still don't get it; the link to the Unity page doesn't explain it either.
Make text appear for 5 seconds in Unity
So at the start of my scene in my Unity game I want text to appear for a few seconds. How would I do this?
Answer: Place the script on the canvas and drag the text to the canvasText slot.
Comment: Linking to an external site is generally not considered a good answer here, as the external link isn't guaranteed to be there forever. Can you edit your answer to explain what the solution is?
Update UI Text via Script?
Basically what I have done is created a Keypad with buttons that pass a variable to the script as a string. Key Pad to Script, script to Text element.
Code (CSharp):
Thought I would add this little update on the script.
Reply: Why would you have to update the text every frame if the content of the text variable is only changing in the "MyFunction" function? And getting the same component every frame over and over again is not so smart; just save it in a variable once.
Thank you fffMalzbier, that is what I was looking for: wrapping it all up in one function!
Reply: That way you don't have to do a GetComponent call at all. Doing it each time your function is called is wasteful. This way you'll have a permanent reference to the Text element you're modifying.
Looks like my RISK: Legacy post got some traction on GooglePlus and BGG. What's funny is that the key attributes of Legacy mechanics are something that most traditional RPGs take for granted. In-game choices having persistent effects in future sessions? Pretty standard stuff in the RPG realm.
Granted, that process usually doesn't involve destroying the actual object.
I think "destruction" is a misnomer in this case. Yes, you do change the game, but whether you consider that "destruction" is a matter of perspective. When I design a new game, I prune off many paths in the process. What Rob Daviau has done in RISK: Legacy is stop juuuust shy of that point in the process.
For example, in the very first game, you have a choice of two faction power stickers to put on your faction card. The one you don't choose is torn up and thrown away. Is that sticker destroyed? Yes. Is the game destroyed? No. The players are simply making the last decisions about how to arrange point values, terrain effects, and faction powers. They join Rob in the game design process. When faced with a choice like the one above, I really feel like I'm creating the game, not destroying it.
So, here are some best practices for making a Legacy-style game as far as I can tell.
1 | Creation
When you can name or label something, that act is satisfying in its own right, even without any mechanical effect. It can be a simple channel for vanity ("Georgetown") or something that reflects actual events in the game ("Dead Man's Valley"). There is great power in naming the unnamed. Don't feel compelled to tack on mechanical effects. Sometimes the warm feeling of seeing a continent named after your child is all you need. Speaking of which, this principle is a great way to get kids involved. Kids love naming stuff.
2 | Persistence
Make some choices have repercussions in all future games. The key to making this work is recording that data. This is where computers have an advantage over humans. Still, a good Sharpie and some clear terms can help a lot. Who won the game? Make that mean something. Who hasn't won yet? Make that mean something. Are there features on the board to claim? How often is that done? Make that mean something. Are there unlockable components? How often do they get opened? The trick is pacing those changes. Know your game's probabilities and pin your game's dynamism to that curve.

3 | Stasis
There is an endpoint where no further changes can be made to the game. This could be an organic endpoint, like stickers running out or spaces being filled in. It might also be a relatively arbitrary endpoint, like a certain number of sessions. Whatever your terms, this is the point where the players are done designing the game. Make sure the game is playable after this point, just not changeable. You'll still get those voices from the balcony grumbling about wasting money on a game that can't be played after a certain point. Just ignore 'em. ;)
So, am I right? Totally off base? Sound off in the comments!
Summing up recent ideas into a concept: Code vs. Prose
by deprecated (Priest)
on Apr 29, 2001 at 20:30 UTC
Folks, I've been mulling this about in my head for two weeks now. mothra noded something about the quality of perlmonks vs. irc vs. newsgroups. I initially thought, "pfff, maybe /s?he/ doesn't get very much out of perlmonks, but I sure do..."
So let me tell you, then, where this started really becoming foremost on my mind. I'm a real unix nut. I know several different flavors, I'm fluent in shell, and even know the internals of the various config files pretty darn well. My skillset and knowledge have pretty much kept me employed as a Unix/Networking weenie. Through being a Unix guy, I learned perl. My resume includes mentions of Unix, Perl, Shell, Apache, Cold Fusion, DB2, and MySQL. So when I go out looking for a new contract, I get offers from lots of places. Recently, I was brought on with an organization to do a number of things, but mostly, I am a perl hacker.
This is where mothra's post comes in. I was at work, taking a break from some code that was hurting my brain. I thought I would go and slip into some perlmonks, and also get some new code into my head, hoping for some insight or a novel way to approach what I was doing.
I didn't find a thing. I'm a pretty good perl programmer. I get what I need done, done. I even manage to sometimes come up with novel ways to do things myself. I'm at a precipice as a programmer. I have all the books I could find about perl, and I'm starting to pick up books on Computer Science and other languages instead. I can't find any higher-level information. I am not getting any deeper into programming, perl, or computer science through Perl Monks.
Instead, I find that I'm stagnating here. I'm reading nodes, and they're almost all prose. Only two of the Best Nodes of last week are actually code. I'd like to be electrified like the first time I read Lincoln Stein's code. I want to read nodes that shake my brain up and insert new ideas for how to code and ways to approach things.
I see a good parallel in WebHick's excellent recent post, Perl Enlightenment and Personal Journey To It. I'm knee deep in the water, it feels great, and I want to go scuba diving. The harder I look, though, the more I see that I have to go looking to other places to find new things in programming. For example, I've got these books in my shopping cart on amazon:
I'm already a professional. I don't have the luxury of going back to school full time to learn more about programming, and to learn "deep code." So I rely on the community. I'm getting nowhere. I had hoped perlmonks could push me past this speed bump. I learned some new things, but I'm still just a perl hacker. I'm still just somebody who writes code. Heck, I even dream (literally) solutions for code problems I have. But it's just putting Legos together, not envisioning the whole picture.
I discussed this with several people in the CB. le correctly mentioned that this is something that gets tossed around a lot: Signal vs. Noise. I've been on USENET and BBSs and mailing lists for 11 years now. We hear this over and over. I don't know what can be done about it, but I am venting particular observations in the hope we can do something about it this time.
While I was whining, Petruchio brought up an interesting term, "reputation inflation." This is, I think, to a large extent true. We see a lot of posts with HUGE reputations. This is due to the massive population growth of the monastery. As an abbot, I get 25 votes a day. I've only been here for six months, and made a bit less than 100 posts. I didn't really mean to, but I have made a whole lot of pretty useless posts. I try real hard to make sure that the post is useful and on-topic, but when I look back, I see that I haven't contributed much of anything.
Perhaps Ovid's recent post, Stubborn as a Saint, is indicative of this. I don't know what specifically got his attention such that he felt compelled to tell people to read his code more carefully. In fact, I don't think a lot of people here are reading code at all. We read conversation and discussion because it's interesting. It is, however, junk food. We have no incentive to post code, nor to review code. "Code" posts get very low (usually < 35) reputations. This post, while full of rhetoric and interesting thought-provoking ideas (okay, maybe I flatter myself), will probably score at least that within a couple days.
Why is this?
I don't know. I wish I had a solution. The thing is, we have had zillions of good suggestions proposed in Perl Monks Discussion, but few of them are ever implemented. This isn't a slight to vroom, but I think we could be fixing this problem and making better programmers.
We don't need to revamp the Voting/Experience System, because we can't agree on how to do it. And besides, that's a symptom, not a cause. People like me should not be getting to level 7 in six months. It's not because the voting and experience system is off, it's because we're all posting these saccharine, brain-junk-food nodes.
Ugh. I can't come up with anything more useful to say on the subject. I think I made my point clear. I wanted to post this (despite the obvious irony of bitching about this specific kind of post) because we have users here who are capable of providing solutions for the monastery and maybe, just possibly, solutions for me. Yeah, tell me where to find enlightenment.
[Haskell-cafe] One-shot? (was: Global variables and stuff)
ahey at iee.org
Thu Nov 11 04:16:04 EST 2004
On Thursday 11 Nov 2004 6:36 am, Judah Jacobson wrote:
> AFAIKS, the definition of "once" that I gave uses unsafePerformIO in a
> perfectly sound manner; namely, the creation of a top-level MVar. It
> only becomes unsafe if certain "optimizations" are performed; but
> then, that's also true for the SafeIO proposal (as I understand it).
That's the trouble with unsafePerformIO. Haskell is supposed to be a
purely functional language and the compiler will assume all functions
are pure. As soon as you use unsafePerformIO to create something that
isn't a function you're in grave danger, even if it "looks safe" at
the local level, it still isn't a function and the damage can't be
contained at the local level. It's only really OK if it still is
a function despite the use of unsafePerformIO (which is possible,
but often hard to be sure about).
> Yeah, perhaps it hasn't been said so much before. :-) You noted
> several days ago that oneShot (a variant of my "once") can be defined
> using top-level mutable variables. I was just pointing out that the
> converse is true: top-level mutable variables can be emulated using
I'm not too sure about that. But I guess the devil's in the detail,
so until that's been thrashed out I'll reserve judgement.
> Again, I would assume that any translation of (x <- someAction) needs
> to have a prohibition on CSE, inlining, etc on the RHS; there's
> nothing special to once here.
Yes, but the trouble is in Haskell if you have x = y that really means
any occurrence of x can be replaced by y (and vice-versa) without changing
the meaning of the program (subject to scoping rules of course).
You're right that your once solution and the (x <- someAction) both have
the same problem. But the difference is the compiler doesn't know that
with the once solution because you've told it that..
myRef = once (newIORef 'a')
..and it will believe you.
> I disagree that this only works for newIORef. Consider (in ghci):
Well no, of course newIORef isn't the *only* case where it works :-)
But I thought the point you were making was that the once guaranteed
that the resulting value was independent of when it got reduced
(presumably for any action). This isn't generally true, any more
than it would be true for the x <- someAction solution if some
action can be *any* IO monadic operation.
That's why the proposal to use a restricted monad was put forward
I.E. an "IO monad" which was not capable of doing any real IO.
In principle at least, with the SafeIO/CIO monads the resulting
initial value could be determined at compile time, which is
exactly what we want I think (same initial value for every program run).
Continuous Integration
● What is continuous integration?
● Building a feature with continuous integration
● Practices of continuous integration
● Benefits of continuous integration
● Introducing continuous integration
● Final thoughts
● Continuous integration tools
● Links
What is continuous integration?
Continuous Integration is a software development practice where members of a team integrate their work frequently; usually each person integrates at least daily, leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly. This presentation is a quick overview of Continuous Integration summarizing the technique and its current usage.
What is continuous integration?
● "it can't work (here)"
● "doing it won't make much difference"
● "yes we do that - how could you live without it?"
The term Continuous Integration originated with the Extreme Programming development process, as one of its original twelve practices. Although Continuous Integration is a practice that requires no particular tooling to deploy, it is useful to use a Continuous Integration server.
● Integration is a "pay me now or pay me more later" kind of activity.
Building a feature with continuous integration
● Let's do something to a piece of software; we assume it's small and can be done in a few hours.
● Take a copy of the current integrated source onto your local development machine.
● Alter the production code, and add or change the automated tests.
● Build and run the automated tests.
● Update the working copy with the changes from the others & rebuild, check for clashes.
● It is your responsibility to create a successful build.
● Commit your changes.
● Build on the integration machine.
● Must fix the build quickly.
● Shared stable base, fewer bugs, bugs show up quickly.
Practices of continuous integration
● Maintain a Single Source Repository
● Automate the Build
● Make Your Build Self-Testing
● Everyone Commits To the Mainline Every Day
● Every Commit Should Build the Mainline on an Integration Machine
● Keep the Build Fast
● Test in a Clone of the Production Environment
● Make it Easy for Anyone to Get the Latest Executable
● Everyone can see what's happening
● Automate Deployment
Maintain a Single Source Repository
● Software projects involve lots of files that need to be orchestrated together to build a product.
● Tools to manage all this are called Source Code Management tools, configuration management, version control systems, repositories, etc.
● Everything you need to do a build should be in there, including: test scripts, properties files, database schema, install scripts, 3rd party libs.
● Keep your use of branches to a minimum.
● In general you should store in source control everything you need to build anything, but nothing that you actually build.
Automate the Build
● Automated environments for builds are a common feature of systems (Make, Ant, NAnt, MSBuild, etc.)
● A common mistake is not to include everything in the automated build (virgin machine – up!)
● Incremental builds, component builds, targets
● It's essential to have a master build that is usable on a server and runnable from other scripts (do not depend much on the IDE)
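The "master build usable on a server and runnable from other scripts" point can be sketched as a small build driver. This is a minimal illustration, not from the slides; the step labels and commands are placeholders standing in for a real project's compile, package, and test invocations:

```python
import subprocess
import sys

# Placeholder steps; a real project would list its compiler, packager,
# and test commands here. Each entry is (label, command).
BUILD_STEPS = [
    ("unit tests", [sys.executable, "-c", "print('tests pass')"]),
    ("package", [sys.executable, "-c", "print('packaged')"]),
]

def run_build(steps=BUILD_STEPS):
    """Run every step in order and fail fast on the first non-zero exit."""
    for label, cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"build failed at step: {label}")
            return False
    print("build succeeded")
    return True
```

Because the driver is a plain script with a boolean result, a CI server, a cron job, or another script can all invoke the same build, which is the point of not depending on the IDE.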
Make Your Build Self-Testing
● A program may run, but that doesn't mean it does the right thing.
● A good way to catch bugs more quickly and efficiently is to include automated tests in the build process.
● CI has a weaker requirement of self-testing code than TDD.
● For self-testing code you need a suite of automated tests that can check a large part of the code base for bugs.
● The rise of TDD has popularized the XUnit family.
● Tools that focus on more end-to-end testing: FIT, Selenium, Sahi, Watir, FitNesse, etc.
● You can't count on tests to find everything.
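To make the self-testing idea concrete, here is a minimal XUnit-style suite using Python's unittest; the `add` function is a hypothetical stand-in for real production code:

```python
import unittest

def add(a, b):
    # Hypothetical production code under test.
    return a + b

class AddTests(unittest.TestCase):
    # XUnit-style tests that an automated build runs on every integration.
    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_order_does_not_matter(self):
        self.assertEqual(add(4, -1), add(-1, 4))

# In a self-testing build, the build script runs something like
# `python -m unittest` and fails the whole build if any test fails.
```

The build's pass/fail status then reflects not just "it compiles" but "it does the right thing", as far as the suite can tell.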
Everyone Commits To the Mainline Every Day
● Integration is primarily about communication.
● The one prerequisite for a developer committing to the mainline is that they can correctly build their code.
● The key to fixing problems quickly is finding them quickly.
● The fact that you build when you update your working copy means that you detect compilation conflicts as well as textual conflicts.
● Since there's only a few hours of changes between commits, there's only so many places where the problem could be hiding. You can even use diff-debugging.
● Frequent commits encourage developers to break down their work into small chunks of a few hours each. This helps track progress and provides a sense of progress.
Every Commit Should Build the Mainline on an Integration Machine
● Using daily commits, a team gets frequent tested builds.
● People not doing an update and build before they commit, environmental differences between developers' machines, and other issues prevent the mainline's healthy state.
● Only if the integration build succeeds should the commit be considered done – the developer's responsibility.
● Use a manual build or a CI server.
● Do not just make builds on a timed schedule.
● If the mainline build fails, it needs to be fixed right away. You're always developing on a known stable base.
● It's not a bad thing for the mainline build to break. Fix fast!
● Patience and steady application – develop a regular habit of working mainline builds.
Keep the Build Fast
● The whole point of CI is to provide rapid feedback.
● For most projects the XP guideline of a ten minute build is perfectly within reason.
● Start working on setting up a staged build.
● Build pipeline – multiple sequential builds.
● The fast commit build is the build that's needed when someone commits to the mainline.
● A secondary build runs when it can – for example, tests that involve external services such as a database, etc.
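The staged build can be sketched as a fast commit stage gating a slower secondary stage. This is a toy illustration; the stage and check names are hypothetical:

```python
def run_stage(stage_name, checks):
    """Run each (name, check) pair in a stage; report the first failure."""
    for check_name, check in checks:
        if not check():
            return f"{stage_name} failed: {check_name}"
    return None

def pipeline(commit_checks, secondary_checks):
    # The commit stage runs first and must stay fast
    # (the ten-minute guideline).
    error = run_stage("commit build", commit_checks)
    if error:
        return error
    # The secondary stage (e.g. database or end-to-end tests) only
    # runs once the commit build is green.
    error = run_stage("secondary build", secondary_checks)
    return error or "pipeline green"
```

Keeping the commit stage small is what preserves rapid feedback: a developer learns within minutes whether the mainline is safe, while the expensive checks run behind it.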
Test in a Clone of the Production Environment
● The point of testing is to flush out, under controlled conditions, any problem that the system will have in production.
● You want to set up your test environment to be as exact a mimic of your production environment as possible.
● It's common to have a very artificial environment for the commit tests for speed, and use a production clone for secondary testing.
● Use virtualization.
Make it Easy for Anyone to Get the Latest Executable
● People find it much easier to see something that's not quite right and say how it needs to be changed.
● Anyone involved with a software project should be able to get the latest executable and be able to run it: for demonstrations, exploratory testing, or just to see what changed this week.
● Provide a well-known place where people can find the latest executable. For the very latest, you should put up the latest executable to pass the commit tests (pretty stable).
Everyone can see what's happening
● CI is all about communication, so you want to ensure that everyone can easily see the state of the system and the changes that have been made to it.
● Tray monitors, lights, lava lamps, toy rocket launchers, etc.
● Use a tool with a web site for dashboards, reporting, and extended information.
● A wall calendar for a QA team to put red & green stickers on, indicating healthy & broken builds.
Automate Deployment
● To do CI you need multiple environments: one to run commit tests, one or more to run secondary tests.
● Use deployment scripts to move between environments.
● If you deploy into production, one extra automated capability you should consider is automated rollback.
● Rolling deployments in clustered environments.
● Trial build to a subset of users.
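The automated-rollback bullet can be sketched like this; the `Environment` class, version strings, and health check are all hypothetical illustrations, not a real deployment API:

```python
class Environment:
    """Toy deployment target that only tracks which version is live."""
    def __init__(self, live_version):
        self.live_version = live_version

def deploy(env, new_version, health_check):
    # Remember the last known-good version before switching.
    previous = env.live_version
    env.live_version = new_version
    if not health_check(env):
        # Automated rollback: restore the previous version on failure.
        env.live_version = previous
        return f"rolled back to {previous}"
    return f"deployed {new_version}"
```

The essential moves are the same in a real deployment script: record the current state before changing it, verify the new state, and restore automatically rather than paging a human to undo it by hand.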
Benefits of continuous integration
● The greatest and most wide-ranging benefit of CI is reduced risk.
● At all times you know where you are, what works, what doesn't, and the outstanding bugs you have in your system.
● CI doesn't get rid of bugs, but it does make them dramatically easier to find and remove.
● Bugs are also cumulative. The more bugs you have, the harder it is to remove each one. Broken Windows syndrome.
● If you have CI, it removes one of the biggest barriers to frequent deployment – between customers and development.
Introducing continuous integration
● There's no fixed recipe (your setup & team).
● Get the build automated. Build the whole system with a single command. On demand.
● Introduce some automated testing into your build. Identify major areas. Start doing.
● Try to speed up the commit build. The magic 10 minutes.
● Begin with Continuous Integration from the beginning for a new project.
● Get some help.
Final thoughts
● Continuous Integration has become a mainstream technique for software development.
● Many teams using CI report that the advantages of CI well outweigh the disadvantages.
● The effect of finding and fixing integration bugs early in the development process saves both time and money over the lifespan of a project.
Final thoughts
Advantages:
● when unit tests fail or a bug emerges, developers might revert the codebase back to a bug-free state, without wasting time debugging
● developers detect and fix integration problems continuously - avoiding last-minute chaos at release dates
● early warning of broken/incompatible code
● early warning of conflicting changes
● immediate unit testing of all changes
● constant availability of a "current" build for testing, demo, or release purposes
● immediate feedback to developers on the quality, functionality, or system-wide impact of code they are writing
● frequent code check-in pushes developers to create modular, less complex code
● metrics generated from automated testing and CI focus developers on developing functional, quality code, and help develop momentum in a team
Disadvantages:
● initial setup time required
● well-developed test-suite required to achieve automated testing advantages
● large-scale refactoring can be troublesome due to continuously changing code base
● hardware costs for build machines can be significant
Continuous integration tools
Most popular:
● Apache Continuum — continuous integration server supporting Apache Maven and Apache Ant. Supports CVS, Subversion, Ant, Maven, and shell scripts
● Hudson — MIT-licensed, written in Java, runs in a servlet container, supports CVS, Subversion, Mercurial, Git, StarTeam, ClearCase, Ant, NAnt, Maven, and shell scripts
● CruiseControl — Java-based framework for a continuous build process
● CruiseControl.NET — .NET-based automated continuous integration server
using System;
using System.Collections.Generic;
using System.Linq;
using BrickPort.Services.Queries;
namespace BrickPort.Infrastructure.Services.InMemory
{
public class InMemoryDataStore
{
private readonly List<string> _validColors;
private readonly List<GameSummary> _games;
private readonly Dictionary<string, string> _playerNames;
private readonly Dictionary<string, string> _playerIds;
public InMemoryDataStore()
{
_games = CreateGames();
_validColors = new List<string> { "Blue", "Red", "Orange", "White", "Brown", "Green" };
_playerIds = new Dictionary<string, string>();
_playerNames = new Dictionary<string, string>();
// Derive the distinct set of players from the seeded games,
// keeping the first occurrence of each PlayerId.
var players = _games.SelectMany(game => game.PlayerScores.Select(x => new Player()
{
PlayerId = x.PlayerId,
PlayerName = x.PlayerName
})).GroupBy(x => x.PlayerId).Select(x => x.First());
foreach(var player in players)
{
_playerIds[player.PlayerId] = player.PlayerName;
_playerNames[player.PlayerName] = player.PlayerId;
}
}
public IReadOnlyCollection<string> ValidColors => _validColors;
public IReadOnlyCollection<GameSummary> Games => _games;
public IReadOnlyCollection<(string, string)> Players => _playerIds.Select(x => (x.Key, x.Value)).ToList();
public void AddNewGame(GameSummary gameSummary) => _games.Add(gameSummary);
public string GetPlayerName(string playerId)
{
// Returns null for unknown ids; a KeyNotFoundException could be thrown here instead.
return _playerIds.TryGetValue(playerId, out var playerName) ? playerName : null;
}
public string GetPlayerId(string playerName)
{
// Returns null for unknown names; a KeyNotFoundException could be thrown here instead.
return _playerNames.TryGetValue(playerName, out var playerId) ? playerId : null;
}
public string AddNewPlayer(string playerName)
{
if (string.IsNullOrWhiteSpace(playerName))
throw new ArgumentNullException(nameof(playerName), "Player name must be provided");
if (_playerNames.ContainsKey(playerName))
throw new ArgumentException($"Player with name {playerName} already exists");
var playerId = Guid.NewGuid().ToString();
_playerIds[playerId] = playerName;
_playerNames[playerName] = playerId;
return playerId;
}
private List<GameSummary> CreateGames()
{
var drewBrees = (PlayerId: Guid.NewGuid().ToString(), PlayerName: "Drew Brees");
var thomasMorstead = (PlayerId: Guid.NewGuid().ToString(), PlayerName: "Thomas Morstead");
var pierreThomas = (PlayerId: Guid.NewGuid().ToString(), PlayerName: "Pierre Thomas");
var marquesColston = (PlayerId: Guid.NewGuid().ToString(), PlayerName: "Marques Colston");
var reggieBush = (PlayerId: Guid.NewGuid().ToString(), PlayerName: "Reggie Bush");
return new List<GameSummary>
{
new GameSummary()
{
Id = Guid.NewGuid().ToString(),
DateUtc = DateTime.UtcNow,
Winner = drewBrees.PlayerName,
InProgress = false,
PlayerScores = new PlayerScoreSummary[]
{
new PlayerScoreSummary()
{
PlayerId = drewBrees.PlayerId,
PlayerName = drewBrees.PlayerName,
Color = "Blue",
VictoryPoints = 10
},
new PlayerScoreSummary()
{
PlayerId = marquesColston.PlayerId,
PlayerName = marquesColston.PlayerName,
Color = "Orange",
VictoryPoints = 9
},
new PlayerScoreSummary()
{
PlayerId = pierreThomas.PlayerId,
PlayerName = pierreThomas.PlayerName,
Color = "White",
VictoryPoints = 7
},
new PlayerScoreSummary()
{
PlayerId = thomasMorstead.PlayerId,
PlayerName = thomasMorstead.PlayerName,
Color = "Red",
VictoryPoints = 6
}
}
},
new GameSummary()
{
Id = Guid.NewGuid().ToString(),
DateUtc = DateTime.UtcNow.AddDays(1),
Winner = pierreThomas.PlayerName,
InProgress = false,
PlayerScores = new PlayerScoreSummary[]
{
new PlayerScoreSummary()
{
PlayerId = pierreThomas.PlayerId,
PlayerName = pierreThomas.PlayerName,
Color = "White",
VictoryPoints = 10
},
new PlayerScoreSummary()
{
PlayerId = drewBrees.PlayerId,
PlayerName = drewBrees.PlayerName,
Color = "Blue",
VictoryPoints = 8
},
new PlayerScoreSummary()
{
PlayerId = reggieBush.PlayerId,
PlayerName = reggieBush.PlayerName,
Color = "Orange",
VictoryPoints = 7
}
}
}
};
}
}
}