Channel: LabVIEW topics
Viewing all 67328 articles

Set Cursor Position


Is it possible to set the cursor position in the background of LabVIEW's main window?


Stored Procedures: how can I set the OUTPUT parameters and execute?


Hello Guys,

 

I am trying to run a stored procedure through NI TestStand or LabVIEW, but with no success. In LabVIEW I get the following error message:

NI_Database_API.lvlib:Cmd Execute.vi->SQL SP Return_Test.vi<ERR>ADO Error: 0x80040E21
Exception occured in Microsoft OLE DB Provider for SQL Server: Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done. in NI_Database_API.lvlib:Rec Create - Command.vi->NI_Database_API.lvlib:Cmd Execute.vi->SQL SP Return_Test.vi

 

And when I tried NI TestStand, I get the following response from the Get Results step:

Error getting data from column "@OutMsgTest". The column '@OutMsgTest' was not returned from the SQL statement.

Error getting data from column "@OutMsgPrint". The column '@OutMsgPrint' was not returned from the SQL statement.

 

SQL (Stored Procedures)

USE [TSTFNLT8]
GO

DECLARE @return_value int,
        @OutMsgTest varchar(100),
        @OutMsgPrint varchar(100)

EXEC @return_value = [dbo].[sproc_App_LookupTestData]
     @FobSN = N'1415199D', -- B83FC39C
     @FOBPN = N'227587-139',
     @OutMsgTest = @OutMsgTest OUTPUT,
     @OutMsgPrint = @OutMsgPrint OUTPUT

SELECT @OutMsgTest as N'@OutMsgTest',
       @OutMsgPrint as N'@OutMsgPrint'

SELECT 'Return Value' = @return_value
GO

 

I have attached both vi and the sequence.

I would appreciate your help.

Marcelo

 

IMAQ image rescale to 50% by averaging 2x2?


I want to resize an image to 50%, so it has half the original X and Y dimensions, with each output pixel being the average of a 2x2 block of input pixels. If I use IMAQ Resample, as I understand it, it simply downsamples by copying a single pixel out of each 2x2 region, with no averaging. I think I could do it by using IMAQ Extract 2 four times, with X step and Y step size = 2 and the Optional Rectangle upper-left corner set to (0,0), (0,1), (1,0), (1,1) respectively, then dividing each pixel value by 4 and adding the four images together for the final 2x2-averaged image.

 

I need real-time (30 fps) performance and I suspect that operation will be inefficient unless implemented at a low level. Is there a better way to do this?

 

The original color camera image is 2048 x 2048, and I want 1024 x 1024.
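For reference, the 2x2 block average itself is arithmetically cheap; here is a sketch of the operation in NumPy (purely to illustrate the math, not an IMAQ call, and the function name is mine):

```python
import numpy as np

def downsample_2x2(img):
    """Average each non-overlapping 2x2 block, halving both dimensions."""
    h, w = img.shape[:2]
    # Reshape so each 2x2 block gets its own pair of axes, then average them.
    return img.reshape(h // 2, 2, w // 2, 2, *img.shape[2:]).mean(axis=(1, 3))

# A 4x4 ramp image becomes a 2x2 image of block averages.
small = downsample_2x2(np.arange(16, dtype=np.float64).reshape(4, 4))
```

The same reshape trick works for a color image (an extra trailing channel axis passes through untouched), which is why a vectorized implementation of this can comfortably run at 30 fps on 2048x2048 frames.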

Problem installing DAQmx Base on LabVIEW 2016 for Linux (Ubuntu 16.04 64-bit)


Dear all,

I've just successfully installed LabVIEW 2016 on my PC running Ubuntu 16.04 (64-bit).

 

Now I'm struggling with the installation of DAQmx Base 15.0.

In particular, I get these error messages:

 

rpm: RPM should not be used directly install RPM packages, use Alien instead!
rpm: However assuming you know what you are doing...
error: Failed dependencies:
    /bin/sh is needed by labview-2015-rte-15.0.0-2.i386
    labview-2015-rte <= 15.0.0 is obsoleted by (installed) labview-2015-rte-32bit-15.0.1-4.i386
rpm: RPM should not be used directly install RPM packages, use Alien instead!
rpm: However assuming you know what you are doing...
error: Failed dependencies:
    /bin/sh is needed by nidaqmxbase-common-15.0.0-f1.x86_64
rpm: RPM should not be used directly install RPM packages, use Alien instead!
rpm: However assuming you know what you are doing...
error: Failed dependencies:
    nidaqmxbase-common >= 15.0.0 is needed by nidaqmxbase-board-support-15.0.0-f1.x86_64
    /bin/sh is needed by nidaqmxbase-board-support-15.0.0-f1.x86_64
rpm: RPM should not be used directly install RPM packages, use Alien instead!
rpm: However assuming you know what you are doing...
error: Failed dependencies:
    /bin/sh is needed by labview-2015-rte-15.0.0-2.i386
    labview-2015-rte <= 15.0.0 is obsoleted by (installed) labview-2015-rte-32bit-15.0.1-4.i386
rpm: RPM should not be used directly install RPM packages, use Alien instead!
rpm: However assuming you know what you are doing...
error: Failed dependencies:
    nidaqmxbase-board-support >= 15.0.0 is needed by nidaqmxbase-cinterface-15.0.0-f1.x86_64
    nidaqmxbase-common >= 15.0.0 is needed by nidaqmxbase-cinterface-15.0.0-f1.x86_64
    /bin/sh is needed by nidaqmxbase-cinterface-15.0.0-f1.x86_64
rpm: RPM should not be used directly install RPM packages, use Alien instead!
rpm: However assuming you know what you are doing...
error: Failed dependencies:
    /bin/sh is needed by labview-2015-rte-15.0.0-2.i386
    labview-2015-rte <= 15.0.0 is obsoleted by (installed) labview-2015-rte-32bit-15.0.1-4.i386
rpm: RPM should not be used directly install RPM packages, use Alien instead!
rpm: However assuming you know what you are doing...
error: Failed dependencies:
    nidaqmxbase-common >= 15.0.0 is needed by nidaqmxbase-usb-support-15.0.0-f1.x86_64
    /bin/sh is needed by nidaqmxbase-usb-support-15.0.0-f1.x86_64

 

I'm a bit confused; maybe DAQmx Base 15.0 is simply too old for LabVIEW 2016.

In any case, it is the most recent version in the NI repository.

 

Do I have to wait for a newer version of DAQmx Base that works with LabVIEW 2016?

 

Thanks in advance

 

 

How's My Coding?


Hey all, I'm currently trying to get better with LabVIEW and was recently given the following task to test my skills:

 

1. Have a button on the screen that when clicked on will generate a random number between 1 and 100.
2. If the number is odd, display a message on the screen that lists the number and the string “Odd Number” next to it.
3. If the number is even, log the number to a text file by saving the number and the string “Even Number” next to it.
4. In either the odd or even case, if the user clicks the button to generate another random number, append the new number and string to the previous.
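The steps above can be sketched in text form (Python here, just to show the routing logic; in LabVIEW this would typically be an event structure, a case structure on the parity, and Write to Text File):

```python
import random

def handle_click(log_lines, odd_messages):
    """One button click: draw 1-100, route odd to screen, even to the log."""
    n = random.randint(1, 100)
    if n % 2:  # odd: display on screen
        odd_messages.append(f"{n} Odd Number")
    else:      # even: append to the log-file contents
        log_lines.append(f"{n} Even Number")
    return n

log_lines, odd_messages = [], []
for _ in range(10):  # simulate ten button clicks; each appends to the previous
    handle_click(log_lines, odd_messages)
```

Appending (rather than overwriting) on each click corresponds to step 4 of the task.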

Currently I believe my code does what's needed, and I think I've used all the best practices/optimizations I'm aware of. However, I was wondering if there is anything I could do to improve the code, or anything I missed.

Where can I find NationalInstruments.Common.Native.dll?


Hello!

 

I'm trying to deploy a project that uses NI-DAQmx, and to do that I need NationalInstruments.Common.Native.dll. But when I install NI-DAQmx from (http://search.ni.com/nisearch/app/main/p/bot/no/ap/tech/lang/sv/pg/1/sn/catnav:du,n8:3478.41.181.5495,ssnav:ndr/) I do not get the DLL I need.

I get:

NationalInstruments.NiLmClientDLL

NationalInstruments.MStudioCLM

NationalInstruments.DAQmx

NationalInstruments.DAQmx.ComponentModel

NationalInstruments.Common

 

So my question is: how do I acquire the NationalInstruments.Common.Native dll?

 

The tutorial at http://www.ni.com/tutorial/52522/en/#toc3 says it should be included in NationalInstruments.DAQmx.

Customizable LabVIEW Interface


Hello,

Any suggestions on how to make this kind of interface for a data acquisition module?
It should ask me how many channels I want to select, then what kind of view I want (waveform/real-time values, RMS values, or both waveforms and numeric values), and then the interface should split the screen into, for example, 2 sections if I want a waveform on one side and a numeric value on the other.

 

Any ideas?

Open/Close FPGA reference turns digital outputs on & off


I have a cRIO-9024 controller on a 9114 chassis with a digital output module, the NI 9401.

 

As shown in the first picture, when I use this code in the host VI, for some reason all the digital outputs on the NI 9401 go high and then quickly go low.

 

The same thing happens at the end of the code when I use the Close FPGA VI Reference.

 

Now if I change it to the second picture, there's no low-to-high-to-low transition when I begin running the host VI, but the problem when closing the FPGA reference remains.

 

I'm wondering if I can stop this from happening.

 

Cheers

 

FPGAFailure1.PNG

FPGAFailure2.PNG


LabVIEW Data Writing Logic Structure


I am currently running code that takes data from a DAQ Assistant at 1 kHz and collects 1000 samples at a time in a while loop. It takes the channel samples, converts them to arrays, averages the values in each channel's array, and then writes the outputs to an output file. So once a second I get one value for each channel and save it. I would like to set it up so that every ten minutes or so it also records one to two seconds of non-averaged data and writes it to a separate file. I was curious if anyone could offer insight into setting up a logic structure that could accomplish this.
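One possible logic structure, sketched in Python (a hedged illustration only; in LabVIEW this would be a case structure driven by the while-loop iteration counter, with one iteration per second):

```python
AVG_PERIOD_S = 1    # one averaged value per second (1000 samples at 1 kHz)
RAW_EVERY_S = 600   # every ten minutes...
RAW_BURST_S = 2     # ...also record two seconds of raw data

def route_iteration(i):
    """Decide what to log on while-loop iteration i (one iteration per second)."""
    actions = ["write_average"]            # always log the averaged value
    if i % RAW_EVERY_S < RAW_BURST_S:      # first 2 s of every 600 s window
        actions.append("write_raw")        # also dump the raw 1000-sample block
    return actions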

 

Best Regards

Determining number of physical channels on 3rd party hardware


Hello all,

I'm working on a program for measuring analog and thermocouple inputs on a USB-1616HS/AI-EXP48 from Measurement Computing. Overall we've been very pleased with the sampling rate and flexibility, but there's a desired feature that's been giving me trouble. I'd like an operator to be able to set up physical channel assignments from the front panel (in the 2 physical channels selectors), and then perform some operations based on number of channels. For example, operator selects whether each channel is disabled, a TC input, or one of several different transducer models with 0-5V output. Column headings in my recording file would then be created automatically at startup, pressure readings from raw V readings would be calculated, and also a quick check that the same physical channels are not assigned as both AI and TC inputs (currently possible with 2 different channel selectors).

 

Current options:

1) Flatten to string both channel selections, then analyze for both number and which unique channels are used by both. Check for overlap before any sampling occurs, and make column headers in sequence. 

 

2) Use static starting positions (i.e. start at exactly channel 0 up to 15 inclusive for TC, then exactly 16 up to 31 for AI), selected by operator starting at 2 specific channels and populating consecutively. Select from premade constants stored to physical channels input.

 

3) Build channel assignments string piecemeal from enum entries per channel, then create IO channels input by unflattening that string somehow. Not 100% sure this is possible honestly.

 

4) Use property node measurement, as shown here.

Not sure that would work, since DAQmx isn't used with an MCC DAQ box.
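For option 1, the overlap check itself is simple once the two channel selections have been reduced to lists of channel numbers (a hedged sketch; parsing the channel names out of the selector strings will depend on how the MCC physical channels are formatted):

```python
def check_channel_overlap(tc_channels, ai_channels):
    """Return the set of channels assigned as both TC and AI inputs."""
    return set(tc_channels) & set(ai_channels)

# Channel 3 is assigned in both selectors, so it is flagged as a conflict.
conflicts = check_channel_overlap([0, 1, 2, 3], [3, 16, 17])
```

An empty result means the two selectors are disjoint and sampling can proceed; a non-empty result gives exactly the channels to report to the operator.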

 

Adding to the complication, I want operator-input names for channels (TC and transducer position names), and also the ability to save and load setup profiles. Hopefully I've explained my situation adequately. Any advice would be appreciated before I plunge forward with option 2.

 

Thanks,
Mark

Transmitting the payload hex bytes as-is to an Excel sheet, as it converts to dec


Attached is the modified NI-XNET subVI, CAN Frame to Table. Transferring the hex data stream to a file (.txt, .xcl) causes the stream to be converted to decimal. How can it be saved in its original hex format? That is the issue.
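The usual cure is to format each payload byte as a two-digit hex string before it goes into the table or file, so the text file stores "1A" rather than 26. A hedged illustration of that formatting step (not the NI-XNET API):

```python
def payload_to_hex(payload):
    """Format raw payload bytes as space-separated two-digit uppercase hex."""
    return " ".join(f"{b:02X}" for b in payload)

# Each byte keeps its hex representation instead of being written as decimal.
row = payload_to_hex(bytes([0x1A, 0x00, 0xFF, 0x7D]))
```

Once the payload is a string, a spreadsheet or text file stores it verbatim; nothing downstream can reinterpret it as a decimal number.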

How do I connect a temperature sensor to a DAQ?


DAQ: USB-6002

Temp Sensor: Tmp36

 

My USB-6002 just came in the mail and I am trying to use it through LabVIEW. I want to use a TMP36 sensor with my DAQ. Can I simply connect the GND pin to "AI GND", V+ to "AI 0", and Vout to "AI 4"?
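Once the sensor's output voltage is being read on an AI channel, the TMP36's datasheet transfer function (500 mV offset, 10 mV per degree C) converts the reading to temperature; a minimal sketch of the scaling you would apply in the VI:

```python
def tmp36_celsius(v_out):
    """TMP36 transfer function: 0.5 V offset, 10 mV/degree C (per datasheet)."""
    return (v_out - 0.5) * 100.0

t = tmp36_celsius(0.75)  # 750 mV corresponds to 25 degrees C
```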

 

tmp36pinout.pngusbDAQinout.png

 

Is there a way to manually draw a unique waveform in a graphics window, or to modify existing block commands to do so?


Fair warning: I am asking this question before committing to LabVIEW and therefore have no code to post. I'd like to see if this feature is possible before I buy anything.

 

I am working on a Raspberry Pi project that will take audio input through a USB ADC, allow the user to build a chain of block diagrams describing digital effects, and apply those effects before outputting through a DAC. LabVIEW appears to be an excellent and relatively easy program for performing most of these tasks.

 

However, one key feature I would like to include is to allow the user to draw (this will be a touchscreen interface) a waveform on a graph and then, operating on principles similar to a vocoder, shape the input signal into that drawn form. Does LabVIEW have a tool with which a user can draw a signal shape in this way? Or can an existing block tool be modified through code to do so?

Pinging a pool of hosts at the same time


I would like to ping a range of hosts (169.254.1.2 to 169.254.1.13) at the same time. If a certain number of them respond (less than or equal to the number of the hosts in the range), then I would like to stop the pings and have an array of the responding IPs returned.

 

As the hosts respond to pings, we can mark them as alive and stop pinging them.

 

I was thinking of starting with a pool of all the valid host IPs, pass that pool into a VI that evaluates them over a short period of time like a few seconds, and then remove the known good hosts from the pool before sending it through for another round. Once the number of desired good hosts has been reached, we can stop.

 

Any thoughts on how to accomplish this? I have a VI that does the pinging, but I'm not sure how to do the pinging in parallel for an arbitrary number of hosts.
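One hedged way to structure the parallel round, sketched in Python: launch one worker per host, collect the responders, and keep only as many as needed (in LabVIEW the analogue would be reentrant clones of your ping VI or asynchronous call-and-collect; the `ping` stub below is a placeholder for the real ping, not a working ICMP check):

```python
from concurrent.futures import ThreadPoolExecutor

def ping(host):
    """Placeholder for the real ping VI; here, pretend even last octets respond."""
    return int(host.rsplit(".", 1)[1]) % 2 == 0

def find_alive(hosts, needed):
    """Ping all hosts in parallel; return up to `needed` responders, in order."""
    alive = []
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        for host, ok in zip(hosts, pool.map(ping, hosts)):
            if ok:
                alive.append(host)
    return alive[:needed]

hosts = [f"169.254.1.{n}" for n in range(2, 14)]  # 169.254.1.2 .. 169.254.1.13
alive = find_alive(hosts, needed=4)
```

To match the "rounds" idea from the post, remove `alive` from `hosts` after each call and loop until `needed` responders have accumulated or a timeout expires.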

LabVIEW + Arduino Uno: bad transfer of data


Hello

I am a newbie. My project is a wireless temperature datalogger using Arduino and XBee; I don't use the LIFA interface, I just receive the packets via the standard LabVIEW serial VIs.

 

A weird thing happens when some numbers from the Arduino/XBee arrive in LabVIEW: the VI doesn't show certain numbers.

The numbers 125 and 126, for instance (and many others). It isn't random; it's always the same numbers.

 

This is the way I send the packet:

uint8_t payload[7];  /* packet buffer (declared here for completeness; byte 0 is set elsewhere) */
int32_t l;           /* thermocouple temperature * 100 */
int16_t j;           /* ambient temperature * 100 */

payload[1] = (l >> 24) & 0xFF;  /* l, big-endian (MSB first) */
payload[2] = (l >> 16) & 0xFF;
payload[3] = (l >> 8)  & 0xFF;
payload[4] = (l >> 0)  & 0xFF;
payload[5] = (j >> 8)  & 0xFF;  /* j, big-endian */
payload[6] = (j >> 0)  & 0xFF;

 

where l is the thermocouple temperature (*100)

and j is the ambient temperature (*100).

 

In LabVIEW I join the bytes and cast l to SGL and j to INT16 (then divide by 100 to get the decimals).
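Decoding the six payload bytes back into the two temperatures can be sketched like this (big-endian, matching the shifts in the Arduino code above; this is only an illustration of the byte order and scaling, not the poster's VI):

```python
import struct

def decode_payload(payload):
    """payload[1:5] = int32 thermocouple*100, payload[5:7] = int16 ambient*100."""
    l, j = struct.unpack(">ih", bytes(payload[1:7]))  # big-endian int32, int16
    return l / 100.0, j / 100.0

# 2512 -> 25.12 C thermocouple, 2150 -> 21.50 C ambient
temps = decode_payload([0x00, 0x00, 0x00, 0x09, 0xD0, 0x08, 0x66])
```

If the LabVIEW side reconstructs different values than this reference decode for the same raw bytes, the problem is in the join/cast; if the raw bytes themselves never arrive intact, the problem is in the serial link.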

 

I believe I'm putting the bits into the packet the wrong way, or the conversion in LabVIEW is wrong.

 

I searched a lot on the internet and saw other people having this same problem with the magical numbers 125 and 126!

 

If somebody could give me an idea of what this is, I would really appreciate it. I've researched a lot on Google. I've tried strings, little-endian, etc.; each way, some numbers go missing. I've also tried adding a constant like 6000 to the value, but that just moves the problem: 125 and 126 then show up, but (for instance) 208 and 209 don't. It is weird (to me, at least).

 

(Excuse my bad English; I'm from Brazil.)

 

thanks


Require support regarding analog DAQ


We are using a cRIO-9063 controller with the analog module NI 9201 (RSE) at Cummins, Pune.

We acquire a voltage signal (0 V to 10 V) from an HBM T40B torque transducer, with Pin 1 (white) as GND and Pin 4 (red) as the AI signal.

Under normal conditions the sensor output reads a stable 0.017 V on a Fluke multimeter and a Fluke data logger, but the NI system shows fluctuations from -0.021 V to 0.040 V.

Talking about prices of applications developed in LabVIEW


Hello to all.

I think this is an interesting topic, and it isn't easy to find information about it on the internet.

I would like to know (more or less) what price a company pays another for an application developed in LabVIEW.

I am new at this; I have to develop a product and I don't know what price I should ask for.

Apart from hours worked, travel expenses, etc., I would like to know the approximate market price of these applications.

The application I have to develop is an artificial vision application which will use the "Vision Acquisition Software" and the "Module Vision Toolkit".

I am asking (more or less) about the market price of the software (not the hardware).

Thanks a lot

"VISA: (Hex 0xBFFF0015) Timeout expired before operation completed."


When I run the main VI (lia vid mono), I get an error from the VISA Read in both subVIs (reset the monochromator, and set wavelength): "VISA: (Hex 0xBFFF0015) Timeout expired before operation completed." But this error occurs only in the FIRST 2-3 iterations of the while loop (the reset subVI runs only once). When I press the Continue button (when the error message pops up), the error does not occur for the remaining iterations of the while loop, and I am able to record the correct values from the lock-in amplifiers.

 

Also, when I run both of these subVIs individually, I now get the same error (initially, running those subVIs, I didn't get any error).

 

I have attached the command codes for the monochromator and the manual for lock-in amplifier SR830.

Thanks in Advance!!!

 

 

How to set the sampling rate for analog data on myRIO?


Hi,

 

I'm Adebah. I just want to know how I can set the sampling rate for analog input on myRIO.

With a DAQ device, I am able to set the sampling rate (in Hz) for the analog input before running the program.

 

I would appreciate help solving this problem.

 

Thanks,

Adebah

FPGA Sine Wave Generator - how to control the amplitude?


What is a good way to programmatically control the output amplitude of the FPGA Sine Wave Generator, before sending it to the physical AO?

I guess I should set full-scale amplitude on the block diagram of the Sine Wave Generator and rescale it before sending it to the output. I use I16 for the sine output, which matches the AO connector input.

In my application I'd like to be able to rescale the sine amplitude anywhere from 0 V to 10 V with fine granularity, not just dividing by 2.
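The integer rescaling itself can be sketched as a fixed-point multiply: treat the gain as a fraction of full scale with, say, 15 fractional bits, multiply, and shift back down (a hedged sketch of the arithmetic only, not the FPGA palette; on the FPGA this maps to a multiply followed by a right shift, or an FXP multiply):

```python
def scale_i16(sample, gain_q15):
    """Scale an I16 sine sample by a Q1.15 gain (0..32767 = 0.0..~1.0)."""
    return (sample * gain_q15) >> 15

half = scale_i16(32767, 16384)  # gain = 0.5 of full scale
```

With a 15-bit gain word the amplitude resolution is 1/32768 of full scale, which is far finer than dividing by powers of 2.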

Thanks


