When working with video, it is often necessary to adjust the projection so that it correctly matches the stage set. The techniques for this purpose are called videomapping. The main technique is moving the corners of your image until it matches your surface on stage, while preserving the perspective. Select the mapping tool in Millumin to do so (or press M). As you can see, this preserves the perspective. It also creates a constraint on your layer: any media played by this layer will be displayed in this mapping (whatever ratio the media has). If you need to remove the mapping on a layer, click the reset-mapping button in the properties view.

More options can be found in the Mapping tab in the properties view, especially the Slice Editor button, which lets you edit the portion of your image that will be used for the mapping (shortcut: CMD+J). Instead of using the whole image, you can choose to use only a small part of your media. This is called a slice, and you can split your layer into many slices, so you can map specific parts of the image differently. You can see the slice as the input mapping, and the perspective transform as the output mapping. Here are the features of the slice editor: a simple example is described in this article: Getting Started.

I don't know if I'm gonna express myself clearly, but I'll do my best. Actually, I've got multiple feature requests in one. As an interactive designer (I've got only a little background in interactive/immersive installation design and development and in game/level design, and also a little bit of web and app UX), I'm looking for the best tools for my projects. At school I learned UDK, Max/MSP/Jitter, and a little bit of Quartz Composer. What I am looking for is an engine that will allow me to create a videogame, but instead of being exported for a console, PC, Mac, mobile, etc., I want it to be projected in an environment/space.
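The corner-pin adjustment described above, dragging the four corners of an image until it matches a surface while keeping straight lines straight, is mathematically a homography (projective transform). A minimal sketch in Python/NumPy, solving the standard eight-unknown system for the four corner correspondences (function names are mine for illustration, not any mapping tool's API):

```python
import numpy as np

def corner_pin_homography(src, dst):
    """Solve for the 3x3 projective transform that maps the four src
    corners onto the four dst corners (corner-pin videomapping).
    src and dst are sequences of four (x, y) points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h11*x + h12*y + h13) / (h31*x + h32*y + 1), cross-multiplied
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)  # fix h33 = 1

def warp_point(H, p):
    """Apply the homography to one (x, y) point."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return (x / w, y / w)  # the divide by w produces the perspective
```

Mapping tools effectively solve this same system every time you drag a corner handle; the division by `w` is what produces the perspective foreshortening that a simple affine warp cannot.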
I want to do videogame design and development mixed with videomapping, like the software Resolume, MadMapper, or Millumin does. If I have to pass through a third-party tool like Wyphon, Syphon, or Spout, I'll do it, but I'd prefer not to. I would want to develop in one tool, to eliminate the chance of bugs and mistakes. I want to be able to work in a UI that lets me build an entire project in one place, like Unreal Engine with a lot of Max 7 or TouchDesigner features added to it. Considering that Immersive Studios uses Unreal Engine as the technology for their works, I was wondering if anyone in the community or from Epic could help me, inform me, or tell me whether Epic or third parties are working on new features and plug-ins that will allow more experimental or artistic projects that incorporate gaming. It would be nice to see Unreal become some kind of multitask or "multi-media" engine, for things like interactive LED installations, Arduino and electronics art or games, or the Internet of Things.

Here's how I'd potentially tackle this problem, and I don't think something like this would be hard at all. I personally think it'd be pretty easy to do, but that all depends on how you want to implement the end result for the user, particularly with regard to whether or not they are wearing any sort of tracking devices or IR sensors. If you want the user to wear nothing, solutions are still very possible, but much more involved. Since you can render the environment and elements you need inside a game, demo, or application, it's all about how to handle the projection when outputting it through a projector instead of a monitor. I spent many years working with the Kinect v1/v2, and if you use them in a multi-Kinect array where you combine the data from multiple sensors, you can stabilize the head movement pretty well.
One other way is using a hacked version of Johnny Lee's implementation, Head Tracking for Desktop VR Displays using the WiiRemote - YouTube (which I'm sure led to many companies using IR to track device positions after Nintendo's Wii). I even ported this to XNA in C# a while back; it's pretty straightforward code, and this would be a very cheap and reliable option. You could pick up a TrackIR or one of those options if they have a reasonably open API to get the data. Then it's just a matter of having some form of custom UCameraComponent where you can feed that data into the engine.
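The core of Lee's trick is feeding the tracked head position into an asymmetric (off-axis) projection frustum, so the physical screen behaves like a window into the scene. A minimal sketch of that math in Python, assuming head coordinates measured from the screen centre in the same units as the screen size (the function name and parameters are illustrative, not taken from any engine API):

```python
def off_axis_frustum(head, screen_w, screen_h, near):
    """Asymmetric view frustum for a head at (hx, hy, hz) relative to the
    centre of a physical screen of size screen_w x screen_h, with hz > 0
    the head's distance from the screen plane. Returns (left, right,
    bottom, top) at the near clipping plane, glFrustum-style."""
    hx, hy, hz = head
    scale = near / hz  # similar triangles: project screen edges onto the near plane
    left   = (-screen_w / 2 - hx) * scale
    right  = ( screen_w / 2 - hx) * scale
    bottom = (-screen_h / 2 - hy) * scale
    top    = ( screen_h / 2 - hy) * scale
    return left, right, bottom, top
```

In an engine, the custom camera component would recompute these frustum bounds (and hence the projection matrix) each frame as new head-tracking data arrives, which is what makes the perspective shift as the viewer moves.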