Media Server :: Prevent The Illegal Distribution Of Video And POST Data?
Oct 31, 2011
I don't quite understand setPublishPassword and setPostingPassword, and I can't find an example of how to use them. I hand clients the groupspecWithoutAuthorizations string, but someone still managed to use groupspecWithAuthorizations and publish illegally. How do I prevent this?
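For what it's worth, here is a minimal sketch of how those calls are usually combined (the group name and passwords are made up). The key point is that the string returned by groupspecWithAuthorizations() embeds the passwords, so only the publisher should ever hold it; viewers get only the groupspecWithoutAuthorizations() string, which lets them join and receive but not publish or post.

import flash.net.GroupSpecifier;

var spec:GroupSpecifier = new GroupSpecifier("myGroup/video");   // hypothetical group name
spec.serverChannelEnabled = true;
spec.multicastEnabled = true;                                    // allow multicast NetStreams in the group
spec.postingEnabled = true;                                      // allow posting (guarded by the password below)
spec.setPublishPassword("publishSecret");                        // placeholder: required to publish streams
spec.setPostingPassword("postSecret");                           // placeholder: required to post messages

var publisherSpec:String = spec.groupspecWithAuthorizations();   // keep this string private
var viewerSpec:String = spec.groupspecWithoutAuthorizations();   // safe to distribute to viewers

If someone is publishing without authorization, it usually means the authorized string (or the passwords) leaked; regenerating the passwords and handing out only viewerSpec is the fix.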
I've noticed that when I use NetConnection.call(methodName) in my client ActionScript it sends a POST request off to the server. I thought this was odd because requests for data are normally GET operations, but I didn't worry about it because it doesn't impact my application. Now I have our sysadmin complaining about the amount of data I'm shifting over POST requests, because POST requests aren't cached by the web servers (and we are working with the kind of traffic levels where that matters). Is there any way I can make call() use GET? The code I'm using is all fairly standard, but here is the relevant snippet:
var nc:NetConnection = new NetConnection();
nc.addEventListener(NetStatusEvent.NET_STATUS, netStatusHandler);
nc.addEventListener(SecurityErrorEvent.SECURITY_ERROR, securityErrorHandler);
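If the goal is simply cacheable GETs for read-only data, one workaround worth sketching (this is not something NetConnection.call itself supports; as far as I know remoting calls always go out as AMF over POST) is to fetch that data with URLLoader instead. The endpoint URL here is hypothetical:

import flash.net.URLLoader;
import flash.net.URLRequest;
import flash.net.URLRequestMethod;
import flash.events.Event;

var req:URLRequest = new URLRequest("http://yourserver/data/listing.json"); // hypothetical HTTP endpoint
req.method = URLRequestMethod.GET;           // plain GETs can be cached by the web tier

var loader:URLLoader = new URLLoader();
loader.addEventListener(Event.COMPLETE, function(e:Event):void {
    trace(loader.data);                      // parse the cached response as needed
});
loader.load(req);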
I have just installed FMS. As far as I can see, there is a VOD application that streams static files. We need FMS to send a live stream, but sourced from a ready-made file.
Is there an existing application that does this already, or would this require programming a new app on our end? Is what I'm trying to do even possible with FMS?
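One common approach, sketched below, is a small server-side script that republishes a recorded file as a live stream; the stream name "fakeLive" and the file name are placeholders:

// main.asc for the application
application.onAppStart = function() {
    var s = Stream.get("fakeLive");                    // placeholder name clients will play as "live"
    if (s) {
        s.play("mp4:prerecorded.f4v", 0, -1, true);    // placeholder file in the app's streams folder
    }
};

Players then subscribe to "fakeLive" exactly as they would to a stream published by a live encoder.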
I'm trying to make software that sends video and audio data to a Flash Media Server using the RTMP protocol. Currently, my program can communicate with a Flash Media Server correctly. The RTMP specification does not describe the raw data inside video/audio messages, so I muxed raw H.264 and AAC data into video/audio messages and sent them to the server. The server seems to accept them, but a video player cannot play back the stream sent from the server; the player just says "Loading..." For test purposes, I sniffed the network packets between Wirecast and the Flash Media Server and ripped out only the video and audio data. Then I muxed those data into video/audio messages and sent them to the Flash Media Server. In this case, the video player connected to the server can play back the stream correctly.
I checked the stream sent from Wirecast, and it does not appear to be raw H.264: the data starts with 0x17 rather than an H.264 start code. Given this situation, I am wondering what kind of container format I should use for the H.264/AAC data sent to the Flash Media Server.
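For reference, RTMP video/audio message payloads follow the FLV tag body layout, with AVCC-style length-prefixed NALUs rather than Annex-B start codes, which is why the Wirecast data begins with 0x17. A sketch of the video payload, illustrated with an ActionScript ByteArray (the comments are the important part):

import flash.utils.ByteArray;

// FLV-style VideoData body for one RTMP video message carrying an H.264 keyframe.
var body:ByteArray = new ByteArray();
body.writeByte(0x17);   // FrameType 1 (keyframe) << 4 | CodecID 7 (AVC); inter frames use 0x27
body.writeByte(0x01);   // AVCPacketType: 0 = AVCDecoderConfigurationRecord, 1 = NALUs
body.writeByte(0x00);   // composition time offset, 24-bit big-endian
body.writeByte(0x00);
body.writeByte(0x00);
// ...followed by the NALUs, each prefixed with a 4-byte length (no 0x00000001 start codes).

The first video message must be the sequence header (AVCPacketType 0 containing the AVCDecoderConfigurationRecord built from the SPS/PPS). Audio messages are analogous: a 0xAF header byte for AAC, then AACPacketType 0 with the AudioSpecificConfig once, followed by AACPacketType 1 messages carrying raw AAC frames.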
I'm having a problem with an Adobe Flash Media Server 3.5.4 (Developer) default install. `ldd` showed me some missing libs, but I was able to overcome that. The server runs fine, but I can't run `fmsadmin`. I'm using Debian 5.0.6 on a VPS. Here is some debug info I was able to get:
srv:/opt/adobe/fms# gdb fmsadmin
GNU gdb 6.8-debian
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later [URL]
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details.
This GDB was configured as "i486-linux-gnu"
(no debugging symbols found)
I'd like to know what the best Linux distribution is for professional use of FMS (streaming events on the web, HD, 50 to 100 viewers): Debian Lenny/Squeeze, CentOS, or RedHat?
I'm completely new to FMS but a seasoned Flex developer. We're building a Flash Builder 4.5 application which will be deployed to iPads. The basic functional requirements are as follows:
1. The presenter in a room has a camera streaming video of themselves.
2. Participants in the same room each have an iPad - there are 30 participants and the stream is sent to all 30 iPads simultaneously.
3. There must be the lowest delay possible with good quality video - we understand this can be achieved with RTMP and that VP6 has to be used (as there is no H.264 support at present in AIR on iOS).
4. The presenter will trigger certain things from their iPad, which will then display certain alerts and content on the participants' iPads (the participant iPads listen for these over their server connection).
Firstly, I'm interested to know your thoughts on this current spec. It was put together from zero initial experience with FMS, after reading a lot online and creating a proof of concept. Secondly, with regard to the two-way interaction: is this something FMS handles? Previously I have used LCDS or BlazeDS for the data-event and streaming aspects of our applications, and I was initially planning on using BlazeDS again in addition to FMS. But from what I have read, FMS possibly handles data push too? Does FMS have LCDS/BlazeDS built in in some way?
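On the data-push question, a rough sketch of how it is often done with plain FMS (no LCDS/BlazeDS); the method names here are made up:

// main.asc: let the presenter push an alert to every connected participant.
application.onConnect = function(client) {
    client.triggerAlert = function(info) {                 // presenter calls nc.call("triggerAlert", null, info)
        for (var i = 0; i < application.clients.length; i++) {
            application.clients[i].call("onPresenterAlert", null, info);
        }
    };
    application.acceptConnection(client);
};

On each participant iPad, the NetConnection's client object would expose a public onPresenterAlert(info:Object) function to react to the push.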
I noticed that any user who has the IP address of my FMS can connect and stream to the server, even though no account exists for them; the server accepts the stream and it shows up in the active streams. How can I prevent that and only allow users I have authorized to connect to FMS, blocking everyone else?
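A minimal server-side sketch of the usual pattern (the token value and the check are placeholders; a real deployment would validate against something stronger, or use the Authorization plug-in):

// main.asc
application.onConnect = function(client, token) {
    // Clients must connect with nc.connect("rtmp://host/app", "expectedToken")
    if (token == "expectedToken") {                        // placeholder check
        application.acceptConnection(client);
    } else {
        application.rejectConnection(client, { message: "not authorized" });
    }
};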
I've installed the free FMS developer version on a Vista OS, using the embedded Apache server. I added PHP 5.13 and everything seems to work except this: when I try to handle HTML forms using the 'post' method, the result is a prompt window asking me to download the PHP file (the one pointed to in the 'action' attribute). It acts as if Apache did not recognize the PHP type. My httpd.conf includes everything needed to make PHP work fine. The clue is that when I change the form method to 'GET', the PHP script works well! My FMS is configured to tunnel HTTP requests, listening on ports 80 and 1935 and proxying HTTP to port 8134 (the defaults). When I bypass this tunneling by requesting the PHP file from my web browser directly on port 8134, it works fine too! So I know the problem comes from FMS and HTTP tunneling, but I have no idea how to solve it...
I'm trying to live stream with FMS. I can stream video to the server with Flash Media Live Encoder, but when I create the player to receive the live stream from the server, I can't receive it. Can anyone give me a step-by-step tutorial on how to do it?
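For comparison, the client side of a live player usually boils down to something like the sketch below; the application name "live" and the stream name "livestream" are assumptions and must match what FMLE publishes to:

import flash.net.NetConnection;
import flash.net.NetStream;
import flash.events.NetStatusEvent;
import flash.media.Video;

var video:Video = new Video(640, 360);
addChild(video);

var nc:NetConnection = new NetConnection();
nc.addEventListener(NetStatusEvent.NET_STATUS, function(e:NetStatusEvent):void {
    if (e.info.code == "NetConnection.Connect.Success") {
        var ns:NetStream = new NetStream(nc);
        ns.client = { onMetaData: function(md:Object):void {} };  // swallow metadata callbacks
        video.attachNetStream(ns);
        ns.play("livestream");                // must match the stream name entered in FMLE
    }
});
nc.connect("rtmp://yourserver/live");         // "live" is the built-in live application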
I've installed Flash Media Server and I send a stream to it with Flash Media Live Encoder via [URL]. On my website, I've embedded code to play this live stream via [URL]. All OK!
But any user can install Flash Media Live Encoder, connect to my FMS, and publish his or her own stream (because the URL to publish and the URL to view are the same). My question is: how do I prevent end users from publishing streams to my FMS and only allow them to view my live stream?
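One server-side sketch of the idea, using FMS 3.5's application.onPublish; the IP check is a placeholder, and the Authorization plug-in or the authentication add-in for FMLE is the more robust route:

// main.asc: only let a known publisher publish; disconnect everyone else who tries.
application.onPublish = function(client, stream) {
    if (client.ip != "203.0.113.10") {                 // placeholder check: swap in real credentials
        trace("Rejecting publish of " + stream.name + " from " + client.ip);
        application.disconnect(client);
    }
};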
I have been using streaming Flash video but just found out that it can't be seen in many places, because Flash streams over ports 1935, 443, and 80, which are often closed by firewalls. Progressive download only needs port 80 (plain HTTP), so it's more likely to get through a firewall.
I'm running FMS 3.5 on RedHat, and I have some server-side ActionScript that is attempting to perform an HTTP GET on a remote URL, e.g.:
var loadVars = new LoadVars();
loadVars.load("http://someurl/");   // load() issues the HTTP GET described above
This works on a Windows XP development server, but not on the RedHat deployment server. Are there any configuration options that I need to be aware of in order to allow my application to perform requests such as these (i.e. HTTP requests to remote servers)?
I have a recorded DVR stream, created with the dvrcast sample application. Of course, I couldn't play it or use it in video post-processing tools, so I tried the F4V Post Processor FMS tool. Unfortunately, both the F4V Post Processor and flvcheck tools throw an error and exit immediately.
I'm using Flash Media Server 3.5 to stream live video to a webpage with a Flex GUI embedded in it. For this I have used the Adobe Flash Media Server start screen, and I was able to stream video correctly. I want to know how long it can keep running like this, i.e. suppose I have to stream like this for 3 or 6 months.
I'm having a very frustrating problem with FMS. A stream recorded on the server-side won't play until I restart the server.
The user flow I'm currently working with is:
1. record webcam stream
2. close the stream
3. call FMS function to post process the stream
4. FMS joins the recorded webcam stream with two pre-recorded videos and stores as a new stream
Then, knowing the new stream name, I'm trying to play it. However, I'm always seeing its last frame and it doesn't play. Surprisingly, restarting the server and re-connecting to it helps, then I can play the stream.
So I wonder whether the problem is in my server-side code:
var stream = Stream.get("streamName");
stream.record();
stream.play(...);
stream.play(...);
stream.play(...);
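A common shape for that post-processing step is sketched below; the stream and file names are placeholders, and the thing I'd check first is that the recording gets closed with record(false) once the queued sources finish, since a file left open for recording typically won't play until something closes it (which would explain why a server restart "fixes" it):

application.composeStream = function() {
    var out = Stream.get("combined");                  // placeholder name for the new stream
    var segmentsRemaining = 3;                         // one per queued source below
    out.onStatus = function(info) {
        // NetStream.Play.Stop should arrive as the queued items finish (worth verifying whether
        // it fires per item or once for the playlist); when everything is done, close the recording.
        if (info.code == "NetStream.Play.Stop" && --segmentsRemaining == 0) {
            out.record(false);                         // closes the file so it can be played without a restart
        }
    };
    out.record();                                      // start recording the composite
    out.play("webcamRecording", 0, -1, true);          // the recorded webcam stream (reset = true)
    out.play("intro", 0, -1, false);                   // first pre-recorded video, queued
    out.play("outro", 0, -1, false);                   // second pre-recorded video, queued
};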
As titled, what is the way to record video/audio files using Flash Media Server over RTMP, and then allow users to access the recorded files over HTTP? What I am trying to do is record a user's microphone input and save it to the server. Afterwards, I would like other users to be able to access the recorded files and manipulate the audio data with computeSpectrum() to do some visualization of the audio. As far as I know, computeSpectrum() cannot work on streaming files, so I think I need to access the recorded files over HTTP instead of RTMP. Is that true?
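As a rough sketch of the HTTP/computeSpectrum side (this assumes the recording ends up reachable over HTTP as an MP3, which FMS does not do by itself - the raw recording is an FLV/F4V in the application's streams folder, so some transcoding or serving step is implied; the URL below is hypothetical):

import flash.media.Sound;
import flash.media.SoundMixer;
import flash.net.URLRequest;
import flash.utils.ByteArray;

var snd:Sound = new Sound(new URLRequest("http://yourserver/recordings/take1.mp3")); // hypothetical URL
snd.play();

// Call this repeatedly (e.g. on ENTER_FRAME) while the sound plays to drive a visualization.
function sampleSpectrum():void {
    var spectrum:ByteArray = new ByteArray();
    SoundMixer.computeSpectrum(spectrum, true);    // FFT mode: 512 floats (256 left + 256 right)
    // read spectrum.readFloat() values here and draw them
}

Note that computeSpectrum() reads the global sound mixer, so if the MP3 is served from a different domain than the SWF it needs a crossdomain policy.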
I want to have a website where users can record short video clips (using a webcam) and then have those videos saved on the FMS for viewing later. I've seen a lot about streaming video, but I actually want to save the video.
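The recording side is usually just a few lines on the client - a sketch, where the application name "recorder" and the stream name are placeholders and the application must exist on the FMS side:

import flash.media.Camera;
import flash.media.Microphone;
import flash.net.NetConnection;
import flash.net.NetStream;
import flash.events.NetStatusEvent;

var nc:NetConnection = new NetConnection();
nc.addEventListener(NetStatusEvent.NET_STATUS, function(e:NetStatusEvent):void {
    if (e.info.code == "NetConnection.Connect.Success") {
        var ns:NetStream = new NetStream(nc);
        ns.attachCamera(Camera.getCamera());
        ns.attachAudio(Microphone.getMicrophone());
        ns.publish("userClip1", "record");     // "record" writes userClip1.flv in the app's streams folder
    }
});
nc.connect("rtmp://yourserver/recorder");      // placeholder application name

Playback later is just ns.play("userClip1") against the same application, or a progressive HTTP copy if you move the file to a web server.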
The auth_addin works as expected when using Flash Media Live Encoder, requiring a username and password before publishing a stream. However, tools like Wirecast, and even the sample broadcasting SWF that comes on the FMS Introduction page, can publish a stream without providing any credentials!
I am new to this company, and my employer, who knows I haven't worked with Flash, asked me to look into making a Flash video for new software that he developed. I have looked at the Adobe website but am not sure where to start. Can I just download an editor, and if so, which one? Do I need anything else with the editor to make it work?